Client Alert 18 Dec. 2023
On December 8, the European Commission and European Parliament reached an agreement on the text for the Artificial Intelligence Act (“AI Act”). The AI Act will provide a comprehensive set of rules for businesses’ use of artificial intelligence, aiming to ensure that AI systems used in the EU are safe and respectful of human rights. The regulation will apply to any AI systems operating in the EU or made available on the EU market. While a few procedural steps remain before the AI Act’s ratification, a “provisional agreement” on the contents of the regulation was the last major hurdle. The provisional agreement’s final text has not been made public, and its precise details remain unknown. Ratification is expected in early 2024.
How Does the AI Act Work?
AI Act’s Scope
The provisional agreement adopts the following universal definition for AI to improve international alignment and provide direction for businesses:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
The provisional agreement clarifies that the AI Act will not apply to (1) areas outside the scope of EU law, (2) AI systems used exclusively for military or defense purposes, or (3) AI systems used solely for research and innovation. The first exclusion covers AI that is created outside of the EU and not made available on the EU market. The second and third exclusions signal that AI used in a military context, or in research to further develop AI, will likely be regulated in other ways rather than by the AI Act.
Risk-Based Approach
The provisional agreement proposes a risk-based, tiered approach to regulating AI systems, categorizing them into four levels according to the potential harms stemming from their use: minimal risk, limited risk, high risk, and unacceptable risk. An AI system is assigned to a risk tier based on how much risk it poses by itself or may pose in specific contexts.
The vast majority of AI systems will be considered minimal risk. These systems, which include AI-powered spam filters and recommendation systems, will not be regulated by the AI Act. Companies may choose to adopt their own codes of conduct governing the use of minimal risk systems, but the AI Act will not require them to do so.
Limited risk systems are systems that raise transparency concerns and require public disclosures. The provisional agreement imposes transparency obligations on limited risk systems so that users know they are interacting with AI-generated content. Limited risk AI includes deepfakes (images or sound digitally manipulated to replace a person's likeness with that of another, easily deceiving the average consumer of the media) and emotion recognition systems (technology that recognizes a user's emotions or characteristics through automated means).
High risk systems are AI systems that may pose significant risks to the health, safety, or fundamental rights of persons because of the context in which they are used.
Systems classified as high risk will have to comply with strict requirements, including installing risk-mitigation systems, documenting the system's activity in detail, providing users with clear information on how the system works, and reporting serious incidents to the European Commission. High risk systems must also clearly label AI-generated content: the labeling must make users aware that they are interacting with AI or inform them that a biometric categorization or emotion recognition system is being used.
Unacceptable risk systems present a clear threat to people's fundamental rights and are categorically banned by the AI Act. These include systems that manipulate human behavior to circumvent free will, systems that lead to unfavorable treatment of a class of people, and certain applications of predictive policing and biometric technologies. Examples include emotion recognition in the workplace or educational institutions, and systems that monitor social behavior and "score" a person based on their actions.
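The four-tier structure described above can be summarized as a simple lookup. The sketch below is purely illustrative: the tier names and one-line obligation summaries paraphrase this alert and are not an official taxonomy from the regulation.

```python
# Illustrative summary of the AI Act's four risk tiers described above.
# Tier names and obligation summaries paraphrase the alert, not the Act itself.
RISK_TIERS = {
    "minimal": "Not regulated by the AI Act; voluntary codes of conduct only.",
    "limited": "Transparency obligations (e.g. disclose AI-generated content).",
    "high": "Strict requirements: risk mitigation, documentation, reporting, labeling.",
    "unacceptable": "Categorically banned.",
}

for tier, obligation in RISK_TIERS.items():
    print(f"{tier:>12}: {obligation}")
```

A company assessing an AI deployment would first determine which tier applies, since the compliance burden differs sharply between adjacent tiers.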
General Purpose AI Systems
The provisional agreement adds further binding obligations for "general purpose AI," meaning complex AI systems that serve a wide range of purposes. General purpose AI systems are used to process audio, video, textual, and physical data. These AI systems are not, by themselves, considered high risk, but may be used as components of high-risk AI systems. Popular examples of general purpose AI systems include OPT-175B (by Meta AI), GPT-3.5 (by OpenAI), Bing Chat Enterprise, DALL·E 2, and ChatGPT. The provisional agreement imposes transparency obligations on general purpose AI systems to ensure that these models are deployed safely. Further, "high impact" general purpose AI systems (systems trained on large amounts of data and possessing advanced capabilities) will face heightened transparency requirements.
Penalties
The provisional agreement outlines fines for violations of the AI Act. These fines are set as a percentage of a company's global annual turnover in the previous financial year or a predetermined amount, whichever figure is higher. The agreement sets the fines as follows: €35 million or 7% for violations involving the banned AI applications, €15 million or 3% for violations of the AI Act's other obligations, and €7.5 million or 1.5% for the supply of incorrect information.
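The "whichever figure is higher" rule can be illustrated with a short calculation. The sketch below uses the tier amounts stated above; the function and tier names are hypothetical, for illustration only, and do not reflect how fines would actually be assessed under the regulation.

```python
# Illustrative sketch of the AI Act fine structure described above.
# Each tier maps to (fixed amount in EUR, fraction of global annual turnover);
# the applicable maximum fine is the higher of the two figures.
FINE_TIERS = {
    "banned_application": (35_000_000, 0.07),    # €35M or 7%
    "obligation_violation": (15_000_000, 0.03),  # €15M or 3%
    "incorrect_information": (7_500_000, 0.015), # €7.5M or 1.5%
}

def maximum_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    fixed, fraction = FINE_TIERS[violation]
    return max(fixed, global_annual_turnover * fraction)

# A company with €1 billion turnover supplying incorrect information:
# 1.5% of €1B is €15M, which exceeds the €7.5M fixed amount.
print(maximum_fine("incorrect_information", 1_000_000_000))  # 15000000.0
```

For large companies, the turnover-based percentage will typically exceed the fixed amount, which is how the regulation scales penalties with company size.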
System of Governance
To help enforce the AI Act, the provisional agreement creates an “AI Office” within the European Commission. The AI Office will oversee the production of the most advanced AI models, create standards within the AI industry, and enforce the provisions of the AI Act in all member states.
The provisional agreement also creates the "AI Board," composed of representatives from the member states, to act as a coordination platform and an advisory body to the European Commission.
Lastly, the provisional agreement creates an "advisory forum" consisting of stakeholders in the AI industry, including general industry representatives as well as representatives from small- and medium-sized enterprises, start-ups, and academia. The advisory forum will advise the AI Board on advancements in the industry and provide technical expertise.
Next Steps
Efforts to finalize the text and submit the provisional agreement for formal adoption will continue, with ratification hoped for in early 2024.
About Curtis
Curtis, Mallet-Prevost, Colt & Mosle LLP is a leading international law firm. Headquartered in New York, Curtis has 19 offices in the United States, Latin America, Europe, the Middle East and Asia. Curtis represents a wide range of clients, including multinational corporations and financial institutions, governments and state-owned companies, money managers, sovereign wealth funds, family-owned businesses, individuals and entrepreneurs.
For more information about Curtis, please visit www.curtis.com.
Attorney advertising. The material contained in this Client Alert is only a general review of the subjects covered and does not constitute legal advice. No legal or business decision should be based on its contents.
Please feel free to contact the persons listed on the right if you have any questions on this important development.
Artificial Intelligence

Contacts:
Elisa Botero, Partner
Jonathan J. Walsh
New York, +1 212 696 6000