Client Alert 18 Dec. 2023

EU Policymakers Agree on Provisions of the Artificial Intelligence Act


On December 8, 2023, the Council of the European Union and the European Parliament reached an agreement on the text of the Artificial Intelligence Act (“AI Act”). The AI Act will provide a comprehensive set of rules for businesses’ use of artificial intelligence, aiming to ensure that AI systems used in the EU are safe and respect fundamental rights. The regulation will apply to any AI system operating in the EU or made available on the EU market. While a few procedural steps remain before the AI Act’s ratification, this “provisional agreement” on the contents of the regulation was the last major hurdle. The provisional agreement’s final text has not been made public, and its precise details remain unknown. Ratification is expected in early 2024.

How Does the AI Act Work?

AI Act’s Scope

The provisional agreement adopts the following definition of an AI system, intended to improve international alignment and provide direction for businesses:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

The provisional agreement clarifies that the AI Act will not apply to (1) areas outside the scope of EU law, (2) AI systems used exclusively for military or defense purposes, or (3) AI systems used solely for research and innovation. The first exclusion covers AI that is created outside of the EU and not made available in the EU. The second and third exclusions signal that AI used in a military context, or in research to further develop AI, will likely be regulated in other ways rather than by the AI Act.

Risk-Based Approach

The provisional agreement proposes a risk-based, tiered approach to regulating AI systems, categorizing them into four levels according to the potential harm stemming from their use: minimal risk, limited risk, high risk, and unacceptable risk. A system is assigned to a risk tier based on the risk it poses on its own or the risk it may pose in specific contexts.

The vast majority of AI systems will be considered minimal risk. These systems, such as AI-powered spam filters and recommendation systems, will not be regulated by the AI Act. Companies may choose to adopt their own codes of conduct for minimal risk systems, but the AI Act will not require it.

Limited risk systems raise transparency concerns and are subject to disclosure obligations so that users know they are interacting with AI-generated content. Limited risk AI includes deepfakes (images or sounds digitally manipulated to replace one person’s likeness with another’s, easily deceiving the average consumer of the media) and emotion recognition systems (technology that recognizes a user’s emotions or characteristics through automated means).

High risk systems are AI systems that may pose significant risks to the health, safety, or fundamental rights of persons because of the context in which they are used. These systems include:

  • biometric identification, categorization and emotion recognition systems if they are used for “real time” identification of a person;
  • systems used in the management and operation of critical infrastructure;
  • medical devices;
  • technology used to assess customers' lending risk or insurance needs;
  • automated methods used for recruiting people; and
  • certain systems used by law enforcement and border control.

Systems classified as high risk will have to comply with strict requirements, including implementing risk-mitigation systems, documenting the system’s activity in detail, providing users with clear information on how the system works, and reporting serious incidents to the European Commission. High risk systems must also clearly label AI-generated content, making users aware that they are interacting with AI or that a biometric categorization or emotion recognition system is being used.

Unacceptable risk systems present a clear threat to people’s fundamental rights and are categorically banned by the AI Act. These include systems that manipulate human behavior to circumvent free will, systems that lead to unfavorable treatment of a class of people, and certain applications of predictive policing and biometric technologies. Examples include emotion recognition in the workplace or educational institutions and systems that monitor social behavior and “score” a person based on their actions.

General Purpose AI Systems

The provisional agreement adds further binding obligations for “general purpose AI,” or complex AI systems that serve a wide range of purposes. General purpose AI systems are used to process audio, video, textual and physical data. These systems are, by themselves, not considered high risk, but may be used as components of high-risk AI systems. Popular examples of general purpose AI systems include OPT-175B (by Meta AI), GPT-3.5 (by OpenAI), Bing Chat Enterprise, DALL·E 2, and ChatGPT. The provisional agreement imposes transparency obligations on general purpose AI systems to ensure that these models are deployed safely. Further, “high impact” general purpose AI systems, meaning systems trained on large amounts of data with advanced capabilities, will face heightened transparency requirements.

Penalties

The provisional agreement outlines fines for violations of the AI Act. These fines are set as a percentage of a company’s global annual turnover in the previous financial year or a predetermined amount, whichever figure is higher. The agreement sets the fines as follows: €35 million or 7% for violations involving banned AI applications; €15 million or 3% for violations of the AI Act’s other obligations; and €7.5 million or 1.5% for the supply of incorrect information.
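To make the “whichever figure is higher” mechanic concrete, the short sketch below computes a company’s maximum exposure for each tier. It is a minimal illustration only, not legal guidance: the function name and the example turnover figure are hypothetical, while the tier amounts are those listed above.

    # Minimal sketch of the fine mechanics described above: each tier is
    # the higher of a fixed amount and a share of the company's global
    # annual turnover in the previous financial year. The function name
    # and example turnover are hypothetical; tier amounts are from the
    # provisional agreement as summarized in this alert.

    FINE_TIERS = {
        "banned_application": (35_000_000, 0.07),     # EUR 35m or 7%
        "other_obligation": (15_000_000, 0.03),       # EUR 15m or 3%
        "incorrect_information": (7_500_000, 0.015),  # EUR 7.5m or 1.5%
    }

    def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
        """Return the maximum fine: whichever is higher of the fixed
        amount and the turnover-based amount for the given tier."""
        fixed_amount, turnover_share = FINE_TIERS[violation]
        return max(fixed_amount, turnover_share * global_annual_turnover_eur)

    # Example: a company with EUR 2 billion turnover deploying a banned
    # application faces up to max(EUR 35m, 7% of EUR 2bn) = EUR 140 million.
    print(max_fine("banned_application", 2_000_000_000))  # 140000000.0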

System of Governance

To help enforce the AI Act, the provisional agreement creates an “AI Office” within the European Commission. The AI Office will oversee the most advanced AI models, contribute to the development of standards within the AI industry, and enforce the provisions of the AI Act across all member states.

The provisional agreement also creates the “AI Board,” composed of representatives from member states, to act as a coordination platform and an advisory board to the European Commission.

Lastly, the provisional agreement creates an “advisory forum” consisting of stakeholders in the AI industry. These stakeholders can include general industry representatives, as well as representatives from small and medium-sized enterprises, start-ups, and academia. The advisory forum will advise the AI Board about advancements in the industry and provide technical expertise.

Next Steps

Efforts to finalize the text and submit the provisional agreement for formal adoption will continue, with hopes for ratification in early 2024. Once ratified, the following timeline will apply (a worked example follows the list):

  • The AI Act will enter into force 20 days after publication;
  • Prohibitions in the AI Act will apply 6 months after publication;
  • Rules on general purpose AI systems, high-risk AI systems, and EU governance of AI will apply 12 months after publication;
  • Most other provisions will apply 24 months after publication.
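As an illustration of this staggered timeline, the sketch below derives each application date from a hypothetical publication date. The publication date and the helper function are assumptions for illustration; the offsets are those listed above.

    from datetime import date, timedelta

    def add_months(d: date, months: int) -> date:
        """Add whole calendar months to a date (stdlib-only helper;
        assumes the day of month exists in the target month)."""
        years, month_index = divmod(d.month - 1 + months, 12)
        return d.replace(year=d.year + years, month=month_index + 1)

    # Hypothetical Official Journal publication date, for illustration only.
    publication = date(2024, 3, 1)

    print("Enters into force: ", publication + timedelta(days=20))  # 2024-03-21
    print("Prohibitions apply:", add_months(publication, 6))        # 2024-09-01
    print("GPAI, high-risk, and governance rules apply:",
          add_months(publication, 12))                              # 2025-03-01
    print("Most other provisions apply:",
          add_months(publication, 24))                              # 2026-03-01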

About Curtis

Curtis, Mallet-Prevost, Colt & Mosle LLP is a leading international law firm. Headquartered in New York, Curtis has 19 offices in the United States, Latin America, Europe, the Middle East and Asia. Curtis represents a wide range of clients, including multinational corporations and financial institutions, governments and state-owned companies, money managers, sovereign wealth funds, family-owned businesses, individuals and entrepreneurs.

For more information about Curtis, please visit www.curtis.com.

Attorney advertising. The material contained in this Client Alert is only a general review of the subjects covered and does not constitute legal advice. No legal or business decision should be based on its contents.

Please feel free to contact us if you have any questions on this important development.
