The EU AI Act: It's Here Now.
To harness the benefits and mitigate the risks of Artificial Intelligence (AI), the European Commission made the world's first attempt to regulate AI in April 2021 by publishing a proposal for an AI regulation (the so-called "AI Act") (COM(2021) 206 final).
About a year ago, we reported on the AI Act, which was then in trilogue negotiations. Since then, significant progress has been made: the law was finalized in May 2024 and published in the Official Journal of the European Union in July 2024 (Regulation (EU) 2024/1689, OJ L, 2024/1689). With this, the AI Act has been adopted and will now be applied in phases. The EU is thus the first jurisdiction worldwide to establish comprehensive rules for AI.
We would like to take this milestone in AI regulation as an opportunity to provide an update. We will highlight the key points of the AI Act and offer insight into how we are implementing the newly established requirements in MyGPT.
- October 17, 2024
Key Points of the AI Act
The AI Act aims to create a clear legal framework for the development and deployment of AI systems within the EU. It pursues several main objectives, including ensuring the safety and fundamental rights of citizens and fostering trust in AI systems. As a regulation, the AI Act applies uniformly across the entire EU.
Definition of AI System
The AI Act builds on the OECD definition of AI and, in Article 3 (1), defines an AI system as a machine-based system that:
- is designed to operate with varying levels of autonomy,
- may exhibit adaptiveness after deployment,
- infers, from the inputs it receives and for explicit or implicit objectives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Risk-Based Approach
The AI Act adopts a risk-based approach to introduce a proportionate and effective regulatory framework for AI systems. AI systems are categorized based on their risk potential, with specific regulations applicable to each category. This means that the higher the risk, the more requirements must be met.
The AI Act defines risk as “the combination of the likelihood of harm occurring and the severity of that harm” (Article 3 (2) of the AI Act) and particularly assesses potential harm to individual or public interests (health, safety, fundamental rights, including democracy, the rule of law, and environmental protection).
Accordingly, AI systems are classified into four risk categories:
Risk Category | Description | Risk Level
---|---|---
AI systems with minimal risk | Permitted without specific obligations under the AI Act. Examples include spam filters or video games. Compliance with voluntary codes of practice is encouraged. | low
AI systems with limited risk | Permitted, but subject to certain transparency obligations (see Article 50 of the AI Act). This includes AI systems intended to interact with natural persons (e.g., chatbots) or to generate synthetic audio, image, video, or text content. | limited
AI systems with high risk | High-risk AI systems (Article 6 et seq. of the AI Act) are permitted but must meet the most extensive requirements of the AI Act (such as introducing a risk management system, ensuring data governance, technical documentation, record-keeping obligations, etc.). These systems are listed in Annexes I and III of the AI Act. High-risk areas include biometrics, critical infrastructure, access to vocational education and training, and personnel management. | high
AI systems with unacceptable risk | AI systems with unacceptable risk (prohibited AI practices, Article 5 of the AI Act) are generally banned. This includes AI systems that manipulate human behavior or exploit vulnerabilities, as well as "social scoring" and "predictive policing". | unacceptable
In addition, the AI Act introduces specific rules for general-purpose AI models (see Articles 51 to 56 of the AI Act). This category includes in particular large language models (LLMs) such as OpenAI's GPT or Google's Gemini. They are subject to documentation and transparency requirements. Particularly powerful general-purpose AI models that pose systemic risks must additionally undergo a thorough evaluation process.
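To make the risk-based approach concrete for internal compliance work, the following is a minimal, hypothetical Python sketch of an AI inventory in which each system is tagged with one of the four risk tiers and flagged if it builds on a general-purpose AI model. The names (`RiskTier`, `AISystemRecord`, `inventory`) are illustrative assumptions and are not prescribed by the AI Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    # The four risk categories of the AI Act (see table above).
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystemRecord:
    """Hypothetical entry in an internal AI inventory used for compliance tracking."""
    name: str
    purpose: str
    risk_tier: RiskTier
    uses_general_purpose_model: bool  # e.g. built on an LLM such as GPT or Gemini


# Example: a chatbot-style assistant that interacts with natural persons falls
# under the limited-risk transparency obligations of Article 50.
inventory = [
    AISystemRecord(
        name="Internal assistant",
        purpose="Answers employee questions in a chat dialogue",
        risk_tier=RiskTier.LIMITED,
        uses_general_purpose_model=True,
    ),
]

# Systems with unacceptable risk must not be deployed at all.
prohibited = [r for r in inventory if r.risk_tier is RiskTier.UNACCEPTABLE]
assert not prohibited, "Unacceptable-risk systems must not be deployed"
```

Such an inventory is of course no substitute for a legal assessment; it merely illustrates how the risk categories can be operationalized internally.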
Role Distribution
Depending on the role assignment, the AI Act imposes different requirements:
The most heavily regulated actors under the AI Act are the providers of AI systems, i.e., entities that develop an AI system, or have one developed, in order to place it on the market or put it into service under their own name or trademark (Article 3 (3) of the AI Act).
In addition, the AI Act also imposes certain obligations on the operators (in the official English terminology, "deployers") of AI systems. An operator is anyone who uses an AI system under their own authority, unless the AI system is used in the course of a personal, non-professional activity (Article 3 (4) of the AI Act).
Entry into Force
The AI Act officially entered into force on August 1, 2024. However, its provisions become applicable gradually, at different points in time. The timeline is as follows:
- February 2, 2025 (+6 months): Prohibited AI practices may no longer be used. In addition, companies must ensure that their staff have an adequate level of AI literacy (Article 4 of the AI Act).
- August 2, 2025 (+12 months): The specific rules for general-purpose AI models and the governance structures (notifying authorities and notified bodies) become applicable. The penalty provisions also apply from this date.
- August 2, 2026 (+24 months): High-risk AI systems under Annex III as well as AI systems with limited and minimal risk must comply with the requirements of the AI Act.
- August 2, 2027 (+36 months): Full compliance with the regulations for high-risk AI systems (Annex I) is required.
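As a small illustration of the phased timeline, the following Python sketch encodes the application dates listed above and reports which obligations already apply on a given date. The helper name `applicable_milestones` and the short milestone descriptions are our own illustrative assumptions.

```python
from datetime import date

# Application dates of the AI Act's phased timeline (see list above).
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices banned; AI literacy required (Art. 4)"),
    (date(2025, 8, 2), "Rules for general-purpose AI models, governance bodies, penalties"),
    (date(2026, 8, 2), "Requirements for high-risk AI systems (Annex III) and remaining provisions"),
    (date(2027, 8, 2), "Full compliance for high-risk AI systems (Annex I)"),
]


def applicable_milestones(today: date) -> list[str]:
    """Return the milestones that already apply on the given date."""
    return [description for deadline, description in MILESTONES if today >= deadline]


if __name__ == "__main__":
    for item in applicable_milestones(date.today()):
        print(item)
```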
Implementation at MyGPT
Risk Classification
MyGPT is an AI system within the meaning of the AI Act: it is a machine-based system that generates outputs (such as information, recommendations, and other text-based results) in response to the submitted prompt. The generated responses can influence the user's environment, particularly when the information provided leads the user to take action.
MyGPT is not classified as a high-risk AI system. Because it interacts with natural persons, it meets the criteria of a limited-risk AI system: MyGPT is designed to communicate with human users in a dialogue, similar to a chatbot.
Consideration and Implementation of the AI Act
When a company uses an AI system like MyGPT, it acts as an operator within the meaning of the AI Act. This means the company has certain obligations to fulfill. Although MyGPT uses an LLM component that qualifies as a general-purpose AI model, the obligations attached to such models apply only to the provider of the model (e.g., OpenAI, Microsoft, Google, etc.). The deploying company, as operator, therefore has no specific requirements in this regard.
Since MyGPT is a limited-risk AI system and is used in everyday business operations, certain aspects need to be considered. The following points aim to help the deploying company understand and comply with the requirements of the AI Act:
- Compliance with Transparency Obligations: Operators must ensure that users are informed that they are interacting with an AI system unless this is obvious (Article 50 (1) of the AI Act). The information must be provided clearly and unambiguously at the latest during the first interaction; this responsibility lies with the deploying company. MyGPT, for example, includes a corresponding note in the information section (see the sketch after this list for one way such a notice could be surfaced).
- AI Competence: Operators of AI systems must take measures to ensure that their personnel possess adequate AI competence. This includes considering technical knowledge, experience, training, and the context of use (Article 4 of the AI Act). At Leftshift One, we have a user onboarding process that addresses these topics. Successful completion of the user onboarding can be confirmed through a participation certificate, which serves as a measure regarding AI competence.
- Review and Testing: The functionality of the AI system should be regularly checked for performance, accuracy, and reliability. Errors, bugs, etc., should be resolved. User feedback should also be incorporated to improve MyGPT.
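To illustrate the first point of the list above, here is a minimal, hypothetical sketch of how the transparency notice required by Article 50 (1) could be surfaced no later than the first interaction. It is not MyGPT's actual implementation; the names `ChatSession`, `AI_DISCLOSURE`, and the `generate_reply` stub are illustrative assumptions.

```python
# Hypothetical sketch: surface the AI disclosure required by Article 50 (1)
# no later than the user's first interaction with the system.

AI_DISCLOSURE = (
    "Note: You are interacting with an AI system. "
    "Responses are generated automatically and may contain errors."
)


def generate_reply(prompt: str) -> str:
    # Placeholder for the actual LLM call behind the assistant.
    return f"(model output for: {prompt})"


class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False

    def send(self, prompt: str) -> str:
        # Prepend the disclosure once, at the latest with the first reply.
        prefix = ""
        if not self.disclosed:
            prefix = AI_DISCLOSURE + "\n\n"
            self.disclosed = True
        return prefix + generate_reply(prompt)


if __name__ == "__main__":
    session = ChatSession()
    print(session.send("What does the AI Act require from operators?"))
```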
Summary
The adoption of the AI Act marks a milestone in the regulation of AI. As a company that has engaged with regulatory requirements early on, we at Leftshift One are well-positioned to implement the now established requirements.
We continue to view the challenges posed by the regulation as an opportunity to strengthen our position as a responsible AI provider and further build the trust of our customers.
We will also continue to inform you about important developments and our progress in implementing the AI Act in the future.
References:
European Commission, A European Approach to Artificial Intelligence, available at https://digital-strategy.ec.europa.eu/de/policies/european-approach-artificial-intelligence (as of September 10, 2024).
Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR), AI Act, available at https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/AI_Act.de.html (as of September 10, 2024).
Disclaimer: This article provides general information and has been compiled to the best of our knowledge and belief. However, no guarantee can be given for the accuracy, completeness, or timeliness of the content. The article is for informational purposes only; it does not constitute legal advice and cannot replace such advice. Responsibility for actions taken on the basis of this article lies solely with the user. Liability for damages or consequences arising from the use of this information is excluded.
Curious about MyGPT? Schedule your free initial consultation!