"Trust Your AI": Focusing on Trustworthiness
The goal of a workshop by Know-Center, Leftshift One, and SGS is to create an ethical certification framework for Artificial Intelligence.
The use of Artificial Intelligence (AI) shifts decision-making responsibility to machines, which can lead to potential risks. However, many companies find it challenging to evaluate and control the risks associated with the still-unproven use of AI. A workshop hosted by Know-Center in collaboration with Leftshift One and SGS, focusing on Trustworthy AI, aims to address this issue by providing approaches for risk assessment and risk mitigation as part of the certification of AI applications.
- 17 October 2024
Christian Weber
CTO, AI Expert & Founder
Karin Schnedlitz
Content Manager
Definition
What exactly is meant by "Trustworthy AI"?
AI risks such as bias, unexplained outcomes (“black box”), lack of robustness against malicious attacks, and ethical issues are well-known concerns. For companies and AI developers, challenges include providing valid training data and implementing safety measures to ensure that the AI operates within defined boundaries. Key considerations for businesses when implementing AI applications include security, transparency, control measures, human intervention in the application, and data protection.
Under the banner of "Trustworthy AI," efforts are underway to consolidate the requirements placed on AI applications. Ensuring trustworthiness involves requirements such as the explainability of AI decisions and the robustness of the system. The High-Level Expert Group on Artificial Intelligence (HLEG-AI) established by the European Union developed the "Ethics Guidelines for Trustworthy AI," which define seven requirements for trustworthy AI:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
Ethical Guidelines
Certification builds trust—in the truest sense.
Dimension: Fairness
The AI application must not lead to unjustified discrimination, for example through unbalanced training data ("bias" or underrepresentation of groups).
Dimension: Autonomy & Control
The autonomy of both the AI application and the human must be ensured, for example through a human-in-the-loop.
Dimension: Transparency
The AI application must provide traceable, reproducible, and explainable decisions.
Dimension: Reliability
The AI application must be reliable, meaning it should be robust and provide consistent outputs under varying input data.
Dimension: Security
The AI application must be secure, ensuring protection against adversarial attacks and safeguarding sensitive data.
Dimension: Data Protection
The AI application must protect sensitive data, such as personal information or trade secrets.
Evaluation Framework
The evaluation framework adopts a risk-based approach in which potential risks are examined from several perspectives. For each dimension under assessment, a protection needs analysis is conducted to determine the impact on affected individuals and the environment.
If the protection needs of a dimension are assessed as low risk, that dimension does not require further examination. Instead, it must be justified why the AI application’s risk regarding that dimension is considered low. However, if the needs analysis indicates that the AI application poses a medium or high risk, a detailed risk analysis must be performed along the identified risk areas. The documented responses for each dimension form the basis for certification.
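To make the triage concrete, here is a minimal Python sketch of the per-dimension decision logic. The dimension names follow the article; the RiskLevel scale, the DimensionAssessment structure, and the next_step function are illustrative assumptions, not the actual data model or tooling of the certification framework.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    """Outcome of the protection needs analysis (assumed three-step scale)."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


# The six dimensions assessed by the framework, as listed above.
DIMENSIONS = [
    "Fairness",
    "Autonomy & Control",
    "Transparency",
    "Reliability",
    "Security",
    "Data Protection",
]


@dataclass
class DimensionAssessment:
    dimension: str
    risk_level: RiskLevel    # result of the protection needs analysis
    justification: str = ""  # required when the risk is rated low


def next_step(assessment: DimensionAssessment) -> str:
    """Return the follow-up action the framework prescribes for one dimension."""
    if assessment.risk_level is RiskLevel.LOW:
        # Low risk: no further examination, but the low rating must be justified.
        return f"{assessment.dimension}: document why the risk is rated low"
    # Medium or high risk: a detailed risk analysis along the identified
    # risk areas is required; its documented responses feed the certification.
    return f"{assessment.dimension}: perform a detailed risk analysis"


if __name__ == "__main__":
    for dim in DIMENSIONS:
        # Placeholder ratings; in practice these come from the needs analysis.
        print(next_step(DimensionAssessment(dim, RiskLevel.MEDIUM)))
```

The sketch encodes only the branching described above: low-risk dimensions require a documented justification, while medium- and high-risk dimensions trigger a detailed risk analysis whose answers form the basis for certification.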
Collaboration between Research and Production
Request a free demo
Bring the advantages of AI-powered applications into your organization. Inquire now for a non-binding, personal consultation with our AI experts.