Client Alert 28 Jun. 2024

Colorado Sets the Bar for AI Compliance with New Law

On May 17, 2024, Colorado became the first state to adopt general legislation regulating artificial intelligence (AI). The Colorado AI Act targets “high-risk” AI systems and seeks to prevent and remedy instances of “algorithmic discrimination.” The law takes effect on February 1, 2026.

Colorado’s legislation follows on the heels of Tennessee’s narrower AI law enacted earlier this year, the Ensuring Likeness Voice and Image Security Act (the “ELVIS Act”), which bans the commercial use of AI-generated works that replicate an individual’s voice without the individual’s consent.

Key Takeaways from the Colorado AI Act

The Colorado AI Act distinguishes between “developers” and “deployers” of AI technology, with separate sets of requirements and duties for each.

Who is a developer?

A developer is any person doing business in the state that develops, or intentionally and substantially modifies, an artificial intelligence system, including a high-risk AI system. (Sec. 6-1-1601(7)).

Who is a deployer?

A deployer is any person doing business in the state that deploys a high-risk AI system. (Sec. 6-1-1601(6)).

What is algorithmic discrimination?

The Act defines algorithmic discrimination as any condition in which the use of a high-risk AI system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, national origin, race, or religion, among other protected classes. (Sec. 6-1-1601(1)(a)).

What is a high-risk AI system?

A high-risk AI system is any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision. (Sec. 6-1-1601(9)(a)).

Excluded from this definition is AI that either (i) performs narrow procedural tasks or (ii) detects decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence human assessment or review. The statute also excludes other technologies that use AI, such as data storage, cybersecurity, and spam filtering. (Sec. 6-1-1601(9)(b)).

What is considered a substantial factor?

A substantial factor is a factor generated by an AI system that is used to assist in making, and is capable of altering the outcome of, a consequential decision. (Sec. 6-1-1601(11)).

What is a consequential decision?

According to Sec. 6-1-1601(3), a consequential decision is any decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, any of the following:

  • Education
  • Employment
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance
  • Legal services

What are the developers’ obligations?

Among other things, developers are required to:

  • Use reasonable care to protect consumers from algorithmic discrimination.
  • Provide and periodically update a public statement on their website describing their high-risk AI systems.
  • Provide disclosures and documentation to deployers regarding intended use, subject to certain exceptions such as for trade secrets.
  • Publicly disclose known or foreseeable risks of algorithmic discrimination and risk mitigation measures.
  • Provide a summary of data used to train the high-risk system.
  • Disclose to the state attorney general and known deployers, within 90 days after discovery, any known or reasonably foreseeable risk of algorithmic discrimination if (i) the developer discovers that the system, as deployed, has caused or is reasonably likely to have caused algorithmic discrimination, or (ii) the developer receives a credible report from a deployer that the system has caused algorithmic discrimination.

What are the deployers’ obligations?

Among other things, deployers are required to:

  • Use reasonable care to protect consumers from algorithmic discrimination.
  • Maintain a risk management policy governing the use of high-risk AI systems.
  • Complete an impact assessment of each high-risk AI system annually and within 90 days of any intentional and substantial modification of the system.
  • Post a public statement on their website describing the high-risk AI systems they deploy, the systems’ purpose, and how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination.
  • Notify consumers when high-risk AI systems are used for consequential decisions before the decisions are made.
  • Disclose to the attorney general any algorithmic discrimination within 90 days after discovery.

Deployers with fewer than 50 full-time equivalent employees need not maintain a risk management policy, conduct an impact assessment, or post a public statement if they:

  • Refrain from using their own data to train the AI;
  • Use the AI system for the intended uses disclosed by the developer; and
  • Make publicly available any impact assessment provided to them by the developer that contains information substantially similar to the information the Act requires of deployers.

However, these deployers are not exempt from the Act’s other obligations.

What advantage may be gained from complying with the Act?

If a developer or deployer has complied with its respective obligations under the Act, there is a rebuttable presumption that it used reasonable care as required under the Act. (Secs. 6-1-1702(1), 6-1-1703(1)). Developers and deployers may also contract with a third party to complete their assessment and reporting obligations.

How will the law be enforced?

There is no private right of action under the Act. The Colorado Attorney General has exclusive authority to enforce its provisions. (Sec. 6-1-1606(1)).

Further, the Attorney General has general rulemaking authority under the Act. Upon request, both developers and deployers must provide all required documentation to the Attorney General.

Conclusion

The Colorado AI Act, anchored by its risk-based framework, is a significant step in regulating AI. Other states are likely to follow with similar or broader legislation in the near future. Companies that develop or deploy high-risk AI systems should begin preparing for compliance well in advance of the February 1, 2026 effective date.
