Political agreement on the EU AI Act
Discover key insights and how to navigate the implications of the EU AI Act for your organization
The European Parliament and the EU's Council of Ministers have reached a political agreement on the EU AI Act, marking a pivotal moment in the regulation of artificial intelligence. With the Act finally moving forward, now is the time to start thinking about what it means for your organization.
In this article:
- What does the AI Act entail?
- Who in your organization is going to deal with these new rules?
- Next steps for the AI Act
On 9 December 2023, the European Parliament and the EU’s Council of Ministers reached a political agreement on the EU AI Act. This is an important step towards a landmark and ambitious regulation of artificial intelligence, described by the EU legislator as a fast-evolving ‘family of technologies’. With the end (finally) in sight, organizations working with AI systems, or planning to, can start preparing for the AI Act. In this blog, we explain how.
What does the AI Act entail?
Before getting into the ‘who’, let’s briefly take a step back: what does the AI Act entail? Although the final text has not yet been released, we’ve identified some key characteristics of the AI Act, partly based on the recently published (and very insightful) Q&A from the European Commission.
- The AI Act is heavily inspired by the GDPR with similar features such as principles, user rights, transparency obligations and self-assessments.
- The AI Act is broad in scope:
(1) having a potential global reach as it also applies to providers of AI systems that are established outside the EU but place their system on the market in the EU;
(2) using a (very) broad definition of AI system. The latest, and expected final, definition is: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
(3) introducing requirements and obligations for stakeholders throughout the supply chain: providers, importers, distributors and deployers (users).
- The AI Act uses a risk-based approach, which means that requirements and obligations apply depending on the level of risk of the (use of the) AI system:
(1) Unacceptable risk: e.g. social scoring and biometric categorization;
(2) High risk: e.g. AI systems that are safety components of products covered by sectoral Union legislation, or AI systems used in specific listed use cases, such as in the sphere of employment or education;
(3) Specific transparency risk: for chatbots and deep fakes; and
(4) Minimal risk: for which providers of such systems may choose to adhere to voluntary codes of conduct.
The classification as high risk is based on the intended purpose and function of the AI system, in line with existing product safety legislation. Key obligations for high-risk AI systems include conformity assessments, quality and risk management systems, registration in a public EU database, and information access for authorities. In addition, high-risk systems must be technically robust and must minimize the risk of unfair bias. (We sketch this tiered classification, together with the compute threshold for general-purpose models discussed below, in the code example after this list.)
- As for general-purpose AI models (including generative AI), the AI Act requires providers of such models to disclose information to downstream system providers, and to have policies in place to ensure that they respect copyright when training their models. In addition, general-purpose AI models trained using a total computing power of more than 10^25 FLOPs are considered to carry ‘systemic risk’, which triggers obligations to assess and mitigate risks, report serious incidents, ensure cybersecurity and provide information on the energy consumption of these models. Many of these obligations are similar to obligations under the GDPR.
- Deployers that are bodies governed by public law or private operators providing public services, as well as operators providing high-risk systems, must perform a fundamental rights impact assessment (‘FRIA’). A FRIA can be carried out in conjunction with a data protection impact assessment (‘DPIA’) under the GDPR.
- The AI Act provides for substantial fines, almost twice as high as under the GDPR. For the most severe violations, involving the prohibited applications, fines can go up to 7% of global turnover or €35 million (GDPR: 4% / €20 million); for a company with a global turnover of €1 billion, for example, that ceiling would be €70 million. Fines of up to 1.5% of global turnover apply for failing to cooperate with authorities and/or to provide accurate information. The AI Act will be supervised by national authorities, supported by a European Artificial Intelligence Board, while a new European AI Office (within the European Commission) will supervise general-purpose AI models, cooperating with the Board and supported by a scientific panel of independent experts.
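To make the risk-based logic above more tangible, here is a minimal, purely illustrative Python sketch (not legal advice, and not the Act’s actual decision procedure): it maps the four risk tiers to the example obligations named above and checks the 10^25 FLOPs systemic-risk threshold for general-purpose models. All identifiers, and the GPU figures in the example run, are our own assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"           # e.g. social scoring
    HIGH = "high risk"                           # e.g. employment, education use cases
    SPECIFIC_TRANSPARENCY = "transparency risk"  # e.g. chatbots, deep fakes
    MINIMAL = "minimal risk"                     # voluntary codes of conduct

# Example obligations per tier, as summarized in this article
# (illustrative, not an exhaustive or authoritative list).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment",
        "quality and risk management systems",
        "registration in a public EU database",
        "information access for authorities",
        "technical robustness and bias mitigation",
    ],
    RiskTier.SPECIFIC_TRANSPARENCY: ["disclose that content/interaction is AI-generated"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

# Compute threshold above which a general-purpose AI model is presumed
# to carry 'systemic risk' under the political agreement.
SYSTEMIC_RISK_FLOPS = 1e25

def has_systemic_risk(training_flops: float) -> bool:
    """Rough check of the systemic-risk compute threshold."""
    return training_flops > SYSTEMIC_RISK_FLOPS

if __name__ == "__main__":
    for obligation in OBLIGATIONS[RiskTier.HIGH]:
        print("high-risk obligation:", obligation)
    # Hypothetical training run: 25,000 GPUs at 4e14 FLOP/s for 90 days.
    training_flops = 25_000 * 4e14 * 90 * 24 * 3600  # ~7.8e25 FLOPs
    print("systemic risk?", has_systemic_risk(training_flops))  # -> True
```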
Who in your organization is going to deal with these new rules?
The AI Act is a complicated piece of legislation because it aims to regulate a ‘family’ of complex technologies with an extensive set of requirements and obligations that apply to specific AI systems, or to AI systems used in specific use cases. In addition, the AI Act interacts with other impactful regulations such as the GDPR and copyright law, and has some consumer law characteristics (on the transparency side). This means that stakeholders from different functions within your organization should join forces and look at the AI Act collectively.
Considering that the exact content and timing of the AI Act will become clearer over the next few months, this is the perfect moment to start preparing and to focus on the ‘who’ side. Who do I need in the team that deals with the AI Act? Who should be accountable and responsible for adherence to this act? Who is already dealing with other legislation covering the digital technologies I am using, and are there overlapping requirements and obligations?
In other words: start working now on a new Target Operating Model that incorporates the AI Act into your existing organization. Determine what capacities you already have, which new ones you’ll need and what training people require. Addressing technology, data and cyber legislation in general, and the AI Act in particular, requires a multidisciplinary approach: not a one-off exercise, but sustainable and clear governance. Subject matter experts with different backgrounds and from different functions benefit from such governance. This way, you ensure that AI Act compliance is aligned with other (upcoming) rules and regulations in the legislative pipeline, for example on data and cyber, preventing duplicated work and other inefficiencies while increasing knowledge sharing.
Other no-regret activities you can start performing now include: mapping the AI systems you are using and for what purpose, determining the risk level of each AI system, and determining your role in the supply chain. As for compliance, you can learn from the GDPR: which processes, tools, guidance and self-assessments can you ‘refurbish’ for AI Act compliance?
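As a thought-starter for that mapping exercise, the sketch below shows one possible shape for such an AI inventory: one record per system, capturing its purpose, provisional risk tier, your role in the supply chain and any GDPR assets you could ‘refurbish’. The field names and the example entry are our own assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One possible (assumed) shape for a no-regret AI inventory entry."""
    name: str
    purpose: str            # what the system is used for
    risk_tier: str          # provisional: "unacceptable", "high", "transparency", "minimal"
    supply_chain_role: str  # "provider", "importer", "distributor" or "deployer"
    reusable_gdpr_assets: list[str] = field(default_factory=list)

# Hypothetical example entry.
inventory = [
    AISystemRecord(
        name="CV screening tool",
        purpose="pre-selection of job applicants",
        risk_tier="high",  # employment use cases are listed as high-risk
        supply_chain_role="deployer",
        reusable_gdpr_assets=["DPIA template", "vendor assessment process"],
    ),
]

# Simple triage: surface the systems that need attention first.
for record in inventory:
    if record.risk_tier == "high":
        print(f"{record.name} ({record.supply_chain_role}): review high-risk obligations")
```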
Next steps for the AI Act
As for the AI Act itself, technical meetings are expected to last until at least mid-February 2024 to work out a number of details in line with the political agreement reached; a near-final text can be expected around that time. The European Parliament and the Council are expected to formally approve the final regulation ahead of the EP elections of 6-9 June 2024. The AI Act will then apply two years after it enters into force, so probably mid-2026, with a shortened period of six months for the bans. Requirements for general-purpose AI models, the conformity assessment bodies and the governance chapter will start applying a year earlier, i.e. twelve months after entry into force.