"Trust Your AI": Focusing on Trustworthiness

The goal of a workshop by Know-Center, Leftshift One, and SGS is to create an ethical certification framework for Artificial Intelligence.

The use of Artificial Intelligence (AI) shifts decision-making responsibility to machines, which entails risks. However, many companies find it difficult to evaluate and control the risks of this still largely untested use of AI. A workshop hosted by Know-Center in collaboration with Leftshift One and SGS, focusing on Trustworthy AI, aims to address this issue by providing approaches for risk assessment and risk mitigation as part of the certification of AI applications.

Christian Weber

CTO, AI Expert & Founder

Karin Schnedlitz

Content Manager

Definition

What exactly is meant by "Trustworthy AI"?

AI risks such as bias, unexplained outcomes (“black box”), lack of robustness against malicious attacks, and ethical issues are well-known concerns. For companies and AI developers, challenges include providing valid training data and implementing safety measures to ensure that the AI operates within defined boundaries. Key considerations for businesses when implementing AI applications include security, transparency, control measures, human intervention in the application, and data protection.

Under the motto “Trustworthy AI,” efforts are underway to consolidate the requirements for AI applications. Ensuring the trustworthiness of an AI application involves key requirements such as the explainability of its decisions and the robustness of the system. The High-Level Expert Group on Artificial Intelligence (HLEG-AI) established by the European Union developed the “Ethics Guidelines for Trustworthy AI,” which define seven requirements for trustworthy AI:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability

Ethical Guidelines

Certification builds trust—in the truest sense.

However, the requirements of the ethics guidelines are not binding; they serve only as recommendations. To strengthen trust in AI applications, several institutions (such as DIN, BSI, Fraunhofer, and PwC, to name a few) are already working on certifications for AI applications. Certification aims to establish quality standards for “AI Made in Europe” and to ensure the responsible use of AI applications. A preliminary, and quite comprehensive, evaluation catalog has been published by the Fraunhofer Institute. It assesses the trustworthiness of AI applications along six dimensions.

Dimension: Fairness

The AI application must not lead to unjustified discrimination, for example through unbalanced training data (keywords: bias, underrepresentation).
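
To make "underrepresentation" concrete, here is a minimal sketch, our own illustration rather than part of the Fraunhofer catalog; the 20% threshold and the grouping by a single attribute are assumptions:

```python
from collections import Counter

def representation_report(labels, min_share=0.2):
    """Compute each group's share of the training data and flag groups
    whose share falls below min_share (possible underrepresentation)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: (count / total, count / total < min_share)
            for group, count in counts.items()}

# Example: one group is clearly underrepresented in the training set
samples = ["male"] * 800 + ["female"] * 150 + ["diverse"] * 50
for group, (share, flagged) in representation_report(samples).items():
    print(f"{group}: {share:.0%}{' <- underrepresented' if flagged else ''}")
```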

Dimension: Autonomy & Control

The autonomy of both the human and the AI application must be ensured, for example through a “Human-in-the-Loop” approach.
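
One common human-in-the-loop pattern is to let the system decide autonomously only when it is sufficiently confident, and to escalate everything else to a person. The sketch below is illustrative; the confidence threshold and the toy model are our own assumptions:

```python
def decide(model, x, confidence_threshold=0.8):
    """Route low-confidence predictions to a human reviewer instead of
    acting autonomously (a simple human-in-the-loop pattern)."""
    label, confidence = model(x)
    if confidence < confidence_threshold:
        return "escalate to human review"
    return f"automated decision: {label}"

# Toy model returning a label and a confidence score
model = lambda x: ("approve", 0.65) if x < 10 else ("approve", 0.95)
print(decide(model, 5))   # escalate to human review
print(decide(model, 42))  # automated decision: approve
```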

Dimension: Transparency

The AI application must provide traceable, reproducible, and explainable decisions.
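
Traceability in practice often starts with an audit trail: every decision is logged together with the model version and a fingerprint of the input, so it can later be reproduced. A minimal sketch under those assumptions (the wrapper function and log format are hypothetical, not from the catalog):

```python
import hashlib, json, time

def logged_predict(model, model_version, x, log):
    """Wrap a prediction so every decision is traceable: record the
    model version, a hash of the input, the output, and a timestamp."""
    y = model(x)
    log.append({
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(x).encode()).hexdigest(),
        "output": y,
        "timestamp": time.time(),
    })
    return y

audit_log = []
model = lambda x: sum(x) > 1.0   # toy "model": a fixed decision rule
print(logged_predict(model, "v1.2", [0.4, 0.9], audit_log))  # True
print(audit_log[0]["input_hash"][:16])  # the decision is now traceable
```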

Dimension: Reliability

The AI application must be reliable, meaning it should be robust and provide consistent outputs under varying input data.
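
A simple way to probe this kind of robustness is to perturb inputs slightly and check that the outputs stay stable. This sketch assumes a numeric model; the noise level, number of trials, and tolerance are illustrative choices:

```python
import random

def robustness_check(model, x, noise=0.01, trials=100, tolerance=0.05):
    """Perturb the input slightly and verify that the model's output
    stays within a tolerance band around the unperturbed output."""
    baseline = model(x)
    for _ in range(trials):
        perturbed = [v + random.uniform(-noise, noise) for v in x]
        if abs(model(perturbed) - baseline) > tolerance:
            return False  # output jumped under a tiny input change
    return True

# Example with a toy "model": a weighted sum of the input features
model = lambda x: 0.5 * x[0] + 0.3 * x[1]
print(robustness_check(model, [1.0, 2.0]))  # True: linear models are stable
```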

Dimension: Security

The AI application must be secure, ensuring protection against adversarial attacks and safeguarding sensitive data.

Dimension: Data protection

The AI application must protect sensitive data, such as personal information or trade secrets.
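
One widely used safeguard here is pseudonymization: directly identifying fields are replaced before data is stored or processed. The sketch below is our own simplified illustration; the field names and salt handling are assumptions, and a real system would manage the salt as a rotated secret:

```python
import hashlib

def pseudonymize(record, sensitive_fields=("name", "email")):
    """Replace directly identifying fields with salted hashes so the
    record can still be linked internally but no longer exposes PII."""
    salt = "example-salt"  # in practice: a secret, regularly rotated salt
    out = dict(record)
    for field in sensitive_fields:
        if field in out:
            out[field] = hashlib.sha256(
                (salt + str(out[field])).encode()).hexdigest()[:12]
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "score": 0.87}
print(pseudonymize(record))  # identifying fields hashed, payload untouched
```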

Risk-Based Evaluation

The evaluation catalog follows a risk-based approach in which potential risks are examined from various perspectives. For each dimension being assessed, a protection needs analysis must first be conducted to determine the impact on affected individuals and the environment.

If the protection needs of a dimension are assessed as low risk, that dimension does not require further examination; instead, it must be justified why the AI application’s risk regarding that dimension is considered low. However, if the needs analysis indicates that the AI application poses a medium or high risk, a detailed risk analysis must be performed for the identified risk areas. The documented responses for each dimension form the basis for certification.
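
The flow can be sketched as a simple per-dimension decision rule. The dimension names follow the catalog, while the risk levels and control logic are our own simplification of the process described above:

```python
DIMENSIONS = ["fairness", "autonomy & control", "transparency",
              "reliability", "security", "data protection"]

def assess(application_risks):
    """For each dimension, decide the next step from the outcome of the
    protection needs analysis: a 'low' risk only needs a written
    justification, while 'medium'/'high' triggers a detailed risk
    analysis. The collected responses document the certification case."""
    documentation = {}
    for dim in DIMENSIONS:
        level = application_risks.get(dim, "high")  # unknown -> conservative
        if level == "low":
            documentation[dim] = "justify why the risk is low"
        else:
            documentation[dim] = "perform detailed risk analysis"
    return documentation

# Example: an application whose main exposure is personal data
risks = {"fairness": "medium", "data protection": "high",
         "autonomy & control": "low", "transparency": "low",
         "reliability": "low", "security": "medium"}
for dim, step in assess(risks).items():
    print(f"{dim}: {step}")
```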

Collaboration between Research and Production

How certification in general, and the documentation of the various requirements in particular, should be structured is the subject of an ongoing research project. In this context, Leftshift One is collaborating with Know-Center and SGS to work through the Fraunhofer Institute's AI evaluation catalog using a practical AI application. The project aims to show whether the catalog is sufficient for certifying AI applications or whether it needs to be adapted.
Now, let's step into the future!

