Client Alert 14 Feb. 2024
On Friday, February 2nd, representatives from EU member states voted unanimously in favor of advancing the European Union’s Artificial Intelligence Act (“AI Act”) to its next stages. EU member states made changes to the “provisional agreement” (an earlier version of the text agreed on December 8, 2023 by the European Commission), passing a new “compromise text” that moved to European Parliament committees for further approval.
On Tuesday, February 13th, the European Parliament’s Committee on Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs voted 71-8 in favor of advancing the AI Act to its final stage: a plenary vote of the AI Act by the EU Parliament (currently scheduled for April 10th-11th).
Below we highlight the notable differences between the “provisional agreement” and the new “compromise text.”
Revised Scope: National Security Excluded
The scope of the Act has been slightly altered. The compromise text makes clear that national security is excluded from the scope of the Act. This change aligns the compromise text “more closely with the respective language used in recently agreed legal acts” in the EU, such as the Cyber Resilience Act and the Data Act.
New Definition of an AI System That Sets It Apart from Plain Software
The compromise text now defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” See Art. 3(1).
This definition has been modified to align more closely with the recognized definition used by international organizations working on artificial intelligence, such as the Organization for Economic Co-operation and Development (OECD). The new definition is more tailored and differentiates AI systems from other, simpler software.
Prohibited AI Practices are Further Detailed
The compromise text adds detail to the list of AI practices that are always prohibited or prohibited in certain circumstances. Article 5 prohibits real-time biometric identification by law enforcement in public areas, with listed exceptions in Article 5(1)(d). The compromise text now adds safeguards to this provision, including monitoring, oversight measures, and limited reporting obligations at the EU level.
Additional prohibited uses of AI include untargeted scraping of facial images for creating or expanding facial recognition databases, emotion recognition (at the workplace or educational institutions), a limited prohibition of biometric categorization based on certain beliefs or characteristics, and a “limited and targeted” ban on individual predictive policing. The ban on predictive policing covers systems that “assess or predict the risk of a natural person to commit a criminal offense, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.” See Art. 5(1)(da).
Fundamental Rights Impact Assessment
Article 29a(1) of the compromise text includes obligations for certain entities using AI to perform an assessment of the impact on fundamental rights that the use of the system may produce. This provision outlines the specific elements that the assessment must address, including a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose, the categories of natural persons and groups likely to be affected by its use in the specific context, and a description of the implementation of human oversight measures. See Art. 29a(1)(a)-(f).
Testing High-Risk AI Systems in Real World Conditions
The compromise text now includes provisions on testing high-risk AI systems in real world conditions, outside of AI regulatory sandboxes. See Art. 54a-54b. This means that testing high-risk AI systems in real world conditions will be possible, subject to a range of safeguards.
General Purpose AI Models
The compromise text includes new provisions concerning general purpose AI (“GPAI”), i.e., systems that have several possible uses, both intended and unintended by the developers.
Changes include new obligations for providers of GPAI models, which include keeping up-to-date and making available, upon request, technical documentation to the AI Office and national competent authorities. See Art. 52c(1)(a). A “provider” is defined as a person or entity that develops an AI system or a GPAI model and places them on the market or puts the system into service. See Art. 3(2).
There are also new obligations for providers of GPAI models to provide information and documentation to downstream providers. See Art. 52c(1)(b). Downstream providers are providers that integrate into a product an AI model that may have been supplied by another entity. See Art. 3(44g).
Providers of GPAI models will be required to adopt a policy to respect EU copyright law, as well as “make publicly available a sufficiently detailed summary” about how the GPAI was trained. See Art. 52c(1)(c).
In addition, providers of GPAI models “presenting systemic risks” will face additional requirements, which include “performing model evaluation, making risk assessments and taking risk mitigation measures, ensuring an adequate level of cybersecurity protection, and reporting serious incidents to the AI Office and national competent authorities.” See Art. 52a. A GPAI model may be classified as a model with systemic risk if it has “high impact capabilities.” The designation of “high impact capabilities” can be given to a GPAI model either when it reaches a certain benchmark of computation ability, or if it is given such a designation by the Commission. See Art. 52a(1).
New Compliance Deadlines for AI Systems Already Deployed and Available
The compromise text also provides compliance deadlines for providers or deployers of AI systems that are already on the market or in service.
Public authorities acting as providers or deployers of high-risk AI systems will have 4 years from the entry into application to bring their systems into compliance. See Art. 83(2).
Further, every GPAI model already deployed before the enactment of the AI Act will have 3 years after the date of enactment to be brought into compliance. See Art. 83(3).
Raised Penalties for Non-Compliance
The compromise text raises the penalty for non-compliance with the provisions specifically concerning prohibited AI practices outlined in Article 5 from the higher of €35 million or 6.5% of annual turnover, to the higher of €35 million or 7% of annual turnover. See Art. 71.
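Purely as an illustrative sketch (not legal advice, and the function name and figures below are our own shorthand for the rule described above), the “higher of” structure of the Article 5 penalty cap can be expressed as a simple maximum:

```python
def article_5_penalty_cap(annual_turnover_eur: float) -> float:
    """Illustrative only: the ceiling on fines for prohibited-practice
    violations under the compromise text, i.e. the higher of EUR 35 million
    or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A company with EUR 1 billion in turnover: 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the higher figure applies.
print(article_5_penalty_cap(1_000_000_000))  # 70000000.0
```

For smaller companies whose 7% figure falls below €35 million, the €35 million floor governs instead.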
Further, there are new fines specifically for GPAI providers for non-compliance with certain enforcement measures, such as requests for information. See Art. 72a.
Entry into Application
The compromise text provides for a general 24-month window after the Act’s entry into force before it goes into effect. See Art. 85. However, there are slightly shorter windows for certain elements, such as a 6-month period for certain prohibited uses of AI and a 12-month period for provisions concerning “notifying authorities and notified bodies, governance, general purpose AI models, confidentiality and penalties.” See Art. 85. There is a slightly longer window of 36 months before provisions regarding high-risk AI systems listed in Annex II go into effect.
About Curtis
Curtis, Mallet-Prevost, Colt & Mosle LLP is a leading international law firm. Headquartered in New York, Curtis has 19 offices in the United States, Latin America, Europe, the Middle East and Asia. Curtis represents a wide range of clients, including multinational corporations and financial institutions, governments and state-owned companies, money managers, sovereign wealth funds, family-owned businesses, individuals and entrepreneurs.
For more information about Curtis, please visit www.curtis.com.
Attorney advertising. The material contained in this Client Alert is only a general review of the subjects covered and does not constitute legal advice. No legal or business decision should be based on its contents.