On January 22, 2024, the pre-final text of the European Union’s Artificial Intelligence Act (“EU AI Act”) – the world’s first comprehensive horizontal legal framework for the regulation of AI systems across the EU – leaked online as an 892-page table comparing the different negotiating mandates for the EU AI Act, followed by a 258-page document setting out the consolidated text. The upcoming law has now been finalized and was endorsed by all 27 EU Member States on February 2.
What’s new?
The consolidated text reflects the political compromise reached on a number of extensively negotiated issues. As announced in the official communication on December’s political agreement,1 the EU AI Act now contains a revised definition of “AI systems” that is aligned with the OECD definition,2 and provides details on the addressees of the EU AI Act, including the obligations of providers and deployers of AI systems. The pre-final text maintains the risk-based approach set out in previous drafts of the EU AI Act,3 which determines whether an AI system can lawfully be developed and used by reference to a sliding scale of risk to fundamental rights,4 banning several types of AI systems entirely and establishing comprehensive compliance obligations for high-risk AI systems.
Scope of Application (Art. 3(1) EU AI Act)5
In order to distinguish AI from simpler software systems, Art. 3(1) EU AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This definition is, as expected, aligned with the definition introduced in the EU Parliament’s negotiating position, which – as noted above – itself is aligned with the definition set out by the OECD.6
The EU AI Act will establish obligations for providers, deployers, importers, distributors and product manufacturers of AI systems that have a link to the EU market. For example, the EU AI Act will apply to: (i) providers that place AI systems on the EU market or put them into service, or that place general-purpose AI models (“GPAI models”) on the EU market; (ii) deployers of AI systems that have their place of establishment, or are located, in the EU; and (iii) providers and deployers of AI systems in third countries, if the output produced by the AI system is used in the EU (Art. 2(1) EU AI Act). The expected exceptions are maintained in the consolidated pre-final text, e.g., the EU AI Act will not apply to military AI systems or AI systems used for the sole purpose of scientific research and development.7 Another exception is made for free and open-source AI systems, which fall outside the EU AI Act unless they are prohibited or classified as high-risk AI systems (Art. 2(5g) EU AI Act).
Member States will be able to maintain or introduce regulations that are more protective of workers’ rights in respect of the use of AI systems by employers (Art. 2(5e) EU AI Act).
Prohibited AI Systems (Art. 5 EU AI Act)
The prohibitions set out in the consolidated text have been known since the political agreement was announced in December 2023,8 but are now set out in more detail. They include the prohibition of biometric categorisation systems that use sensitive characteristics (Art. 5(1ba) EU AI Act) and strict limitations on the use of real-time remote biometric identification systems in publicly accessible spaces (Art. 5(1d) EU AI Act). The EU AI Act also prohibits AI systems that exploit any vulnerabilities of a person or a specific group of persons due to their age, disability or specific social or economic situation (Art. 5(1)(b) EU AI Act).
High-Risk AI Systems (Art. 6 et seq. EU AI Act)
The consolidated text adopts an amended classification mechanism for high-risk AI systems, combining the classifications from the previous text versions9 (an abstract definition of high-risk AI systems with reference to the AI systems listed in Annex II and Annex III of the EU AI Act, Art. 6(1) and (2) EU AI Act) with a newly added exemption for systems posing “no significant risk of harm, to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making” (Art. 6(2a) EU AI Act). An AI system does not pose such a risk if its intended use is limited to:
- performing narrow procedural tasks;
- making improvements to the results of previously completed human activities;
- detecting decision-making patterns without replacing human assessments; or
- performing mere preparatory tasks to a risk assessment.
However, an AI system is always considered high-risk if it performs profiling of natural persons (this decision logic is sketched below).
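Purely for illustration, the classification test can be read as a simple rule set. The following Python sketch is our own simplification, not part of the EU AI Act; the category labels are shorthand for the exemptions listed above, and the statutory test is considerably more nuanced:

```python
from dataclasses import dataclass

# Shorthand labels for the Art. 6(2a) exemptions listed above (our own naming).
EXEMPT_USES = {
    "narrow_procedural_task",
    "improving_completed_human_activity",
    "detecting_decision_patterns_without_replacing_human_assessment",
    "preparatory_task_for_risk_assessment",
}

@dataclass
class AISystem:
    listed_in_annex_iii: bool  # referred to in Annex III?
    intended_uses: set         # shorthand use labels, see EXEMPT_USES
    performs_profiling: bool   # profiling of natural persons?

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of Art. 6(2)/(2a): an Annex III system is high-risk
    unless all intended uses fall under the exemptions, but profiling of
    natural persons always defeats the exemption."""
    if not system.listed_in_annex_iii:
        return False  # Art. 6(1) (Annex II safety components) not modeled here
    if system.performs_profiling:
        return True
    return not system.intended_uses <= EXEMPT_USES  # subset check

# Example: an Annex III system used only for narrow procedural tasks is exempt.
print(is_high_risk(AISystem(True, {"narrow_procedural_task"}, False)))  # False
```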
The consolidated text provides new details on the obligations of providers who consider that their AI system is not high-risk despite being referred to in Annex III (which lists high-risk systems). For example, assessment documentation and registration of the system in the EU database must be completed10 prior to placing the system on the EU market (Art. 6(2b) EU AI Act). The process and criteria for the addition of new high-risk use cases by the EU Commission are specified in Art. 7 EU AI Act.
The consolidated text further contains new details on compliance obligations for high-risk AI systems (Art. 8 et seq. EU AI Act), including written third-party agreements (Art. 28(2b) EU AI Act) and data governance (Art. 10 EU AI Act). The EU AI Office will have the authority to develop and recommend voluntary model contractual terms for third-party agreements. In terms of data governance, high-risk AI systems that make use of techniques involving the training of AI models with data will have to be developed on the basis of training, validation and testing data sets.
General-Purpose AI Models (Art. 52 et seq. EU AI Act)
The consolidated text confirms that the EU AI Act also applies to providers of GPAI models and contains an entirely new section on such models (Title VIIIA, General Purpose AI Models, EU AI Act). A GPAI model is defined as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the EU market and that can be integrated into a variety of downstream systems or applications” (Art. 3(1)(44b) EU AI Act).
As noted above, the EU AI Act will not apply to any AI systems or models (including GPAI models and their output) where they are specifically developed and put into service for the sole purpose of scientific research and development (Art. 2(5a) EU AI Act).
The classification of GPAI models with systemic risk is addressed in Art. 52a EU AI Act. A GPAI model is considered to pose a systemic risk if it has high impact capabilities or is identified as such by the Commission. A GPAI model is presumed to have high impact capabilities11 when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25 (Art. 52a(2) EU AI Act).
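To put the threshold in perspective, the presumption can be expressed as a simple check. The sketch below is our own illustration; the 6-FLOPs-per-parameter-per-training-token estimate is a common industry heuristic, not part of the EU AI Act, and the model figures are hypothetical:

```python
# Presumption threshold under Art. 52a(2) of the leaked text: > 10^25 FLOPs.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_high_impact(training_flops: float) -> bool:
    """True if cumulative training compute crosses the presumption threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Rough estimate: ~6 FLOPs per parameter per training token (heuristic, not law).
params, tokens = 1.8e12, 1.3e13  # hypothetical large model
estimated_flops = 6 * params * tokens  # ~1.4e26
print(presumed_high_impact(estimated_flops))  # True: above the 1e25 threshold
```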
The relevant provider of a GPAI model is required to notify the Commission without delay, and in any event within two weeks, after the above requirements are met or once it becomes known that they will be met. A list of AI models with systemic risk will be published and frequently updated by the Commission, without prejudice to the need to respect and protect intellectual property rights and confidential commercial information or business secrets (Art. 52b(5) EU AI Act).
All providers of GPAI models are subject to certain obligations, such as: (i) making available technical documentation of the model, including its training and testing process, and providing information to AI system providers who intend to use the GPAI model; (ii) cooperating with the Commission and national competent authorities; and (iii) putting in place a policy to respect EU copyright law (Art. 52c EU AI Act). Providers of GPAI models with systemic risk will additionally have to perform standardized model evaluations, assess and mitigate systemic risks, track and report incidents, and ensure cybersecurity protection (Art. 52d(1a), (1b), (2) and (1c) EU AI Act).
To demonstrate compliance with the aforementioned obligations, providers of GPAI models with systemic risk may rely on codes of practice12 until harmonized standards are adopted at EU level (Art. 52d(2) EU AI Act). Providers that do not rely on codes of practice will be able to develop their own means of compliance, although these require the Commission’s approval. The development process and content of the codes of practice are further specified in Art. 52e EU AI Act, according to which the AI Office takes a key role in developing and monitoring the codes of practice.
Deep fakes
Deep fakes are now defined as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful” (Art. 3(44bI) EU AI Act).
The consolidated text sets out transparency obligations for providers and deployers of certain AI systems and GPAI models that are stricter than those in some of the previous drafts of the EU AI Act. These include disclosure obligations for deployers of deep fakes, subject to exceptions where the use is authorized by law to detect, prevent, investigate or prosecute criminal offenses. Where the content forms part of an evidently artistic work, the transparency obligations are limited to disclosing the existence of such generated or manipulated content in a way that does not hamper the display or enjoyment of the work (Art. 52(3) EU AI Act).
Penalties (Art. 71 EU AI Act)
Member States are required to take into account the interests of SMEs, including start-ups, and their economic viability, when introducing penalty levels for violations of the EU AI Act (Art. 71(1) EU AI Act).
The maximum penalty for non-compliance with the prohibitions in Art. 5 EU AI Act is an administrative fine of up to 35 million Euros or 7% of worldwide annual turnover, whichever is higher. Breaches of certain other provisions13 are subject to a maximum fine of 15 million Euros or 3% of worldwide annual turnover, whichever is higher. The maximum penalty for the supply of incorrect, incomplete or misleading information is 7.5 million Euros or 1%14 of worldwide annual turnover, whichever is higher (Art. 71(5) EU AI Act). For SMEs and start-ups, each of the above fines is subject to the same maximum amounts or percentages, but whichever is lower (Art. 71(5a) EU AI Act).
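By way of illustration, the higher-of rule (and the lower-of rule for SMEs) can be expressed as a short calculation. The function and figures below are our own example, not part of the EU AI Act:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float,
             sme: bool = False) -> float:
    """Fine cap per Art. 71(5)/(5a) of the leaked text: the higher of a fixed
    amount or a share of worldwide annual turnover; for SMEs/start-ups, the
    lower of the two applies."""
    turnover_based = turnover_eur * pct_cap
    return min(fixed_cap_eur, turnover_based) if sme else max(fixed_cap_eur, turnover_based)

# Art. 5 prohibition breach, hypothetical EUR 2bn turnover:
print(max_fine(2_000_000_000, 35_000_000, 0.07))            # 140000000.0 (7% exceeds EUR 35m)
print(max_fine(2_000_000_000, 35_000_000, 0.07, sme=True))  # 35000000.0 (lower cap applies)
```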
There is also a penalty regime for providers of GPAI models, set out in Art. 72a EU AI Act, under which providers of GPAI models may be subject to maximum fines of 3% of their annual worldwide turnover or 15 million Euros, whichever is higher. Fines may be imposed if the Commission finds that the provider intentionally or negligently infringed the relevant provisions of the EU AI Act, failed to comply with a request for documentation or information, or failed to provide access to the GPAI model for the purpose of conducting an evaluation. Separately, the right of natural and legal persons to report instances of non-compliance is now regulated in a new Chapter (Art. 68a et seq. EU AI Act), which includes a right to request clear and meaningful explanations from the deployer of an AI system (Art. 86c EU AI Act).
Implementation timeline (Art. 85 EU AI Act)
The EU AI Act will enter into force on the 20th day after its publication in the EU Official Journal (Art. 85(1) EU AI Act) and will generally apply 24 months after entry into force (Art. 85(2) EU AI Act), except for the following specific provisions listed in Art. 85(3) EU AI Act (an illustrative calculation of the resulting dates follows the list):
- The prohibitions in Titles I and II (Art. 5) EU AI Act will apply six months after entry into force (Art. 85(3-a) EU AI Act);
- Codes of practice should be ready nine months after the EU AI Act enters into force (Art. 85(3) EU AI Act);
- Penalties will apply 12 months after entry into force (Art. 85(3a) EU AI Act);
- Obligations for GPAI models will apply after 12 months, or after 24 months for GPAI models already on the market (Art. 85(3a) EU AI Act); and
- Obligations for high-risk AI systems within the meaning of Art. 6(1) EU AI Act (i.e., AI systems intended to be used as a safety component of a product or AI systems listed in Annex II) will apply after 36 months (Art. 85(3b) EU AI Act).
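For illustration only, the staggered application dates can be computed from the publication date; the publication date used below is a pure assumption, as the actual date of publication in the Official Journal is not yet known:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped for simplicity)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

publication = date(2024, 6, 1)                       # hypothetical OJ publication date
entry_into_force = publication + timedelta(days=20)  # Art. 85(1): 20th day after publication

milestones = {                                       # months after entry into force
    "Prohibitions (Art. 5)": 6,
    "Codes of practice ready": 9,
    "Penalties; obligations for new GPAI models": 12,
    "General application; GPAI models already on the market": 24,
    "High-risk AI systems under Art. 6(1)/Annex II": 36,
}

for label, months in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{add_months(entry_into_force, months).isoformat()}  {label}")
```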
Member States will have to: (i) designate at least one notifying authority and one market surveillance authority; and (ii) communicate to the Commission the identity of the competent authorities and the single point of contact. They will also have to make publicly available information on how the competent authorities and the single point of contact can be contacted, by 12 months after entry into force (Art. 59 EU AI Act).
Each Member State is expected to establish at least one regulatory sandbox15 within 24 months of the EU AI Act entering into force (Art. 53(1) EU AI Act).
Next Steps
The pre-final text was endorsed by all 27 EU Member States on February 2. The torch now passes to the European Parliament’s Internal Market and Civil Liberties Committees for adoption of the pre-final text, followed by a plenary vote provisionally scheduled for April 10–11. This timeline for the adoption of the EU AI Act is tight, but necessary in view of the upcoming European Parliament elections in June.
Timo Gaudszun and Emily Trittel contributed to the development of this publication.
1 European Council, Press Release, “Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world”, December 9, 2023.
2 See OECD Recommendation on Artificial Intelligence 2019.
3 See Dawn of the EU’s AI Act: political agreement reached on world’s first comprehensive horizontal AI regulation.
4 The stated purpose of the Act at Recital (1) is to “promote the uptake of human centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy and rule of law and environmental protection, against harmful effects of artificial intelligence systems in the Union and to support innovation.”
5 The numbering of the Articles refers to the numbering in the leaked consolidated text of the EU AI Act and may change in the final text version.
6 See FN. 2.
7 See https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai?s=AI and Art. 2(3), (5a) EU AI Act.
8 See https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai?s=AI.
9 See Art. 6 of the EU Commission’s April 2021 Proposal (available here), Art. 6 of the EU Parliament’s negotiation position from June 2023 (available here) and the political agreement of December 2023 (see FN. 1).
10 See Art. 51(1a) EU AI Act (registration obligation) and Art. 60 EU AI Act (EU database).
11 Defined in Art. 3(1)(44c) EU AI Act.
12 See https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai?s=AI and Recitals 60s and 60t of the EU AI Act.
13 See provisions laid down in Art. 5 EU AI Act and in Art. 71(4a-i) EU AI Act.
14 The European Council Press Release (see FN. 1) mentions 1.5%.
15 See https://www.whitecase.com/insight-alert/dawn-eus-ai-act-political-agreement-reached-worlds-first-comprehensive-horizontal-ai?s=AI.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities.
This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice.
© 2024 White & Case LLP