The EU AI Act: A New Era for Artificial Intelligence in Europe
Artificial Intelligence (AI) undoubtedly offers a multitude of benefits and opportunities for our society. With AI, we can achieve better healthcare, safer transportation, more efficient manufacturing, and cheaper, more sustainable energy supply. However, the disadvantages and challenges associated with the use of AI must not be overlooked. From data protection and security concerns to issues of discrimination, bias, and ethical questions, there are many aspects that need to be considered in the development and operation of AI systems.
October 17, 2024

Philipp Nöhrer
AI Legal & Compliance Expert
To leverage the benefits and mitigate the drawbacks of AI, the European Commission made the world’s first attempt to regulate AI in April 2021 by publishing a proposal for an AI law (the so-called “EU AI Act”) (COM [2021] 206 final). The goal of the regulation is to create “harmonized rules for the marketing, operation, and use of AI systems” within the EU. Since 2021, not only has the technology evolved (as recently demonstrated by ChatGPT), but the legal text is also taking shape.
Most recently, in June 2023, members of the European Parliament overwhelmingly approved their position on the EU AI Act. Through their amendments to the Commission’s proposal, they aim to ensure that AI systems are overseen by humans and are safe, transparent, traceable, non-discriminatory, and environmentally friendly. This marks the beginning of the trilogue negotiations on the final version of the regulation.
The following article outlines what the EU AI Act understands by AI systems and highlights the obligations for these systems. It will also take a critical look at the latest developments and the relationship with the GDPR before concluding with a summary.
(Un)Clear Definition of “AI Systems”?
In the legal context, legal definitions are used to explain legal terms and establish their meaning. Definitions serve primarily to avoid ambiguities in the application of laws, assist in interpretation, and ensure uniform interpretation.
The EU AI Act applies to AI systems that are marketed, operated, or used within the EU. Thus, the scope of the EU AI Act is limited to the application of AI systems. For reasons of legal certainty, it is important that a clear and binding definition exists.
However, there are different proposals for defining the term “AI systems”:
- According to the Commission’s proposal from April 2021, AI systems are defined as systems developed using specific techniques and concepts (such as machine learning, deep learning, expert systems, statistical approaches, etc.) with goals set by humans. Annex I specifies the techniques and concepts. This annex has faced criticism because the catalog covers very different concepts, is not clearly defined, and would thus open a wide scope for application.
- In contrast, the European Parliament formulated the definition in its compromise text to be technology-neutral. According to this, AI systems are understood to be autonomous systems that influence their physical or virtual environment with their results. The European Parliament aligns with the definition used by the OECD.
All proposals share the commonality that AI systems produce results (such as predictions, recommendations, or decisions) that can affect the interacting environment. This influence on the environment includes, for example, controlling a machine, algorithmic trading, or the impact of a recommendation from the AI system on a human decision-maker.
Overall, the Commission’s definition is broader than that of the Parliament. Under either definition, however, most software systems will quickly qualify as AI systems, since they typically deliver some form of prediction, recommendation, or decision that influences their environment by prompting (or not prompting) a specific action.
What Requirements Does the EU AI Act Impose on AI Systems?
Once a system is classified as an AI system, the next step is to determine what requirements the EU AI Act imposes on it.
The EU AI Act follows a risk-based approach. This means that the intensity of the requirements adjusts to the risk posed by the specific AI system. The greater and more likely the potential harm or risk, the stricter the regulatory framework for the AI system.
The EU AI Act distinguishes between four risk levels:
1. Prohibited Systems with Unacceptable Risk
2. High-Risk AI Systems
3. Low-Risk AI Systems
4. AI Systems that Pose a Risk in Specific Cases (Minimal Risk)
The term “risk” is not defined in the EU AI Act. According to traditional understanding, risk describes the combination of the likelihood of a hazard occurring that causes harm and the severity of that harm.
Depending on the classification of the AI system into a risk category, the EU AI Act attaches different compliance and information obligations:
As the table shows, the regulatory framework focuses clearly on high-risk AI systems, which is why this risk level is subject to most of the requirements of the EU AI Act.
It is also notable that AI systems whose risk is assessed as minimal generally have free access to the European market. In other words, AI systems with minimal risk are not subject to any obligations beyond already existing legislation.
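The risk-based structure described above can be illustrated in code. The following is a purely illustrative sketch: the tier names follow the article’s four risk levels, but the obligation lists (`OBLIGATIONS`) are hypothetical simplifications for demonstration, not the Act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk levels, as summarized in the article."""
    UNACCEPTABLE = 1  # prohibited systems
    HIGH = 2          # high-risk AI systems
    LOW = 3           # low-risk AI systems
    MINIMAL = 4       # risk only in specific cases

# Hypothetical mapping from tier to the *character* of obligations;
# the actual, binding obligations are defined in the Act itself.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["market ban"],
    RiskTier.HIGH: ["conformity assessment", "data governance", "transparency"],
    RiskTier.LOW: ["transparency duties"],
    RiskTier.MINIMAL: [],  # no duties beyond already existing legislation
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The key design point mirrored here is that minimal-risk systems map to an empty obligation list: they face no requirements beyond existing law, exactly as noted above.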
Latest Developments
Generative AI Systems
The Slovenian Council presidency has proposed another category of AI systems: general-purpose systems, also known as generative AI systems or “foundation models.” These systems can perform general core competencies such as image and speech recognition, generation of audio and video files, pattern recognition, question answering, or translation. They thus offer a wide range of applications on which further training processes and software development can build.
It remains uncertain whether these generative AI systems will also be subject to the obligations of the EU AI Act. Members of the European Parliament have suggested that these systems should be subject to nearly the same obligations as high-risk AI systems. The Parliament demands that generative models like ChatGPT meet additional transparency requirements, such as disclosing that the content was generated by AI. Additionally, these models should be designed to ensure that no illegal content is generated and that the training data used is published.
This proposed amendment has faced criticism: the stringent regulation of foundation models, regardless of the use case, could impose disproportionate compliance costs on developers of such systems, potentially hindering development. It has also been pointed out that developers of foundation models could be held accountable for non-compliance, with no distinction made between commercial and open-source software. Since it cannot be completely ruled out that these systems might later be used for harmful purposes, this regulation could lead to increased risk for providers of such models.
Innovation Encouragement
To promote innovation in the field of AI, members of the European Parliament have developed exceptions for research activities and for AI components provided under open-source licenses. The proposals also encourage regulatory sandboxes, i.e., controlled environments established by public authorities in which AI can be tested before its deployment.
Complaint Rights for Affected Individuals
Additionally, Members of the European Parliament aim to strengthen the rights of affected citizens by establishing complaint mechanisms regarding AI systems and ensuring that they receive explanations for decisions made by these systems.
Relationship Between the EU AI Act and the GDPR
The EU AI Act is intended to complement the GDPR. The scopes of both EU legal acts are clearly distinct: while the GDPR regulates the processing of personal data, the EU AI Act governs the use of AI systems. Additionally, the EU AI Act categorizes the datasets used in connection with AI according to their specific purposes in training, validation, and testing data.
For high-risk AI systems, the EU AI Act sets clear requirements for data governance (see Article 10 of the EU AI Act). Data governance includes selecting suitable and qualitative datasets and identifying potential data gaps. In general, data should be representative, complete, accurate, available, sufficient, and suitable. An essential element of data governance also involves avoiding potential biases and discrimination. These requirements aim to ensure the quality of the datasets while reducing the risk of discrimination due to inadequately chosen data.
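The data governance duties sketched above (completeness, representativeness, identification of data gaps) can be approximated in practice with simple dataset checks. The following is a minimal, hypothetical sketch, assuming tabular records represented as dictionaries; `dataset_gaps` and `class_balance` are illustrative helpers, not part of any official compliance tooling under Article 10.

```python
def dataset_gaps(records, required_fields):
    """Report which required fields are missing or empty per record
    (a naive completeness check)."""
    gaps = {}
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) in (None, "")]
        if missing:
            gaps[i] = missing
    return gaps

def class_balance(labels):
    """Share of each class label in the training data, a crude proxy
    for representativeness and potential bias."""
    counts = {}
    for lbl in labels:
        counts[lbl] = counts.get(lbl, 0) + 1
    total = len(labels)
    return {lbl: c / total for lbl, c in counts.items()}
```

Checks like these would only be a starting point: a skewed `class_balance` result, for example, can flag under-represented groups in the training data before a potentially discriminatory model is built.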
Regarding the processing of personal data, the EU AI Act does not currently impose new requirements—rather, the relevant provisions of the GDPR must be followed.
Conclusion
Regulating AI is an appropriate tool to ensure the safety, robustness, data protection, compliance, and ethics of AI systems. All these measures contribute to increasing trust and acceptance of AI systems. However, regulating AI is a complex task that requires a balanced consideration of technological development, social impacts, and legal aspects.
At Leftshift One, we generally welcome AI regulation. The content of the EU AI Act is moving in the right direction in many areas and is already being implemented by us today. Transparency, accountability, trustworthiness, and data sovereignty have always been core components of our philosophy and are proactively implemented in our customer projects. However, we would appreciate further clarifications regarding the definition of AI systems. It is crucial that generative AI systems do not face overregulation, as development in this area should not be slowed down.
With the positive vote in the EU Parliament, negotiations with EU member states in the Council and the Commission regarding the final shaping of the regulation text have now begun. A consensus is expected to be reached by the end of 2023. We will continue to closely monitor these developments.
In the next article on this exciting topic, we will discuss how we at Leftshift One are already implementing the core aspects of the EU AI Act today!
On July 26, our next AI Legal Talk will take place, where we will address the most common questions surrounding AI projects. More information and registration details can be found here.
References
- European Commission, A European Approach to Artificial Intelligence, available at https://digital-strategy.ec.europa.eu/de/policies/european-approach-artificial-intelligence (as of June 27, 2023)
- European Parliament, AI Law: A Step Closer to First Rules for Artificial Intelligence (May 11, 2023), available at https://www.europarl.europa.eu/news/de/press-room/20230505IPR84904/ki-gesetz-ein-schritt-naher-an-ersten-regeln-fur-kunstliche-intelligenz (as of June 27, 2023)
Philipp Nöhrer has been the in-house AI Legal and Compliance Expert at Leftshift One for 4 years. His expertise and publications include:
- HTL Kaindorf for Computer Science
- Diploma in Law and Master’s in IT Law & Management
- Evaluating the trustworthiness of AI applications – Lessons learned from an audit