Colorado Becomes First State to Enact Landmark Artificial Intelligence (AI) Bill

Category: Federal & State Compliance

Written by Ashley L. Felton and Katarina D. Stockton from Michael Best & Friedrich LLP on May 23, 2024

The Colorado Legislature passed a landmark AI bill establishing standards and requirements for those who develop or deploy certain artificial intelligence (AI) systems. Specifically, the new law requires businesses to take steps to prevent potential bias in AI products used for consequential decision-making, particularly decisions related to employment, fair housing, banking/lending, healthcare, and education. The bill (SB24-205), titled Consumer Protections for Artificial Intelligence, takes effect on February 1, 2026.

Important Definitions

  • “Algorithmic discrimination” means any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors a group on the basis of actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law.
  • A “consumer” is any Colorado resident. As a result, all Colorado employees or applicants fall within this definition.
  • A “deployer” means a person doing business in Colorado that deploys a high-risk AI system. With few exceptions, most Colorado employers fall within the definition of “deployer” and must abide by the new law’s requirement to use reasonable care to avoid algorithmic discrimination starting February 1, 2026.
  • A “high-risk AI system” means any AI system that makes, or is a substantial factor in making, a consequential decision.

Requirements and Penalties

The law requires deployers of a high-risk AI system to use reasonable care to avoid algorithmic discrimination. Additionally, deployers of high-risk AI systems must disclose to consumers that such a system is being used, unless it would be obvious to a reasonable person that they are interacting with an AI system. Violation of the law constitutes an unfair trade practice under Colorado law, which may result in a civil penalty of up to $20,000 for each violation.

Reasonable Care

A deployer has a rebuttable presumption that it used reasonable care to protect consumers against algorithmic discrimination if it does the following:

  1. implements a risk management policy and program for the high-risk system;
  2. completes an impact assessment of the high-risk system;
  3. annually reviews each deployed high-risk system to ensure that it is not causing algorithmic discrimination (one illustrative screening check appears after this list);
  4. notifies a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;
  5. provides a consumer with an opportunity to correct any incorrect personal data that a high-risk system processed in making a consequential decision;
  6. provides a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk system;
  7. makes a publicly available statement summarizing the types of high-risk systems that it currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer; and
  8. discloses to the attorney general, within 90 days after discovery, any algorithmic discrimination that the high-risk system has caused or is reasonably likely to have caused.
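
By way of illustration only: the annual review in item 3 above could include a statistical screen for disparate impact. The minimal Python sketch below applies the EEOC’s four-fifths (80%) rule of thumb to hypothetical selection-rate data; the function names and sample data are assumptions for illustration, and nothing in SB24-205 prescribes this or any other particular metric.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate, rate / best >= 0.8) for g, rate in rates.items()}

# Hypothetical outcome data: (protected-class group, hired?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

for group, (rate, passes) in four_fifths_check(sample).items():
    print(f"group {group}: rate={rate:.2f} passes_four_fifths={passes}")
```

A failing ratio here is only a screening signal that warrants closer review, not a legal conclusion; the statute’s impact-assessment and annual-review obligations go well beyond any single metric.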

Other States and Jurisdictions Seek to Follow

Although Colorado is now at the forefront of such AI regulation in the United States, many other state legislatures are looking to pass similar legislation. For example, an AI bill currently pending in the California legislature (AB-2930) would bar businesses from using AI software to discriminate against job applicants and employees and would require notification that an automated decision tool is being used to make a consequential decision. Additionally, an AI bill in Illinois (HB 5116) would require a deployer to annually perform an impact assessment for any automated decision tool it uses and would require that persons be notified at or before the time an automated decision tool is used to make a consequential decision about them.

Additionally, an AI substitute bill currently pending in Connecticut (2024 CT S 2) seeks to protect the public from harmful unintended consequences of AI and train the workforce to use AI.

Moreover, at the federal level, the US House’s pending bill (HR7621), titled the “No Robot Bosses Act,” would add protections for job applicants and employees related to automated decision systems and would require employers nationwide to disclose when and how these systems are being used.

Action Items

  • Colorado businesses need to ensure their high-risk AI systems are ready for the February 1, 2026 effective date by proactively inventorying the AI tools they use and formulating risk management policies and internal controls to track the efficacy of those tools and monitor for unwanted bias (a minimal inventory sketch follows this list).
  • Because algorithmic discrimination is a top priority for many states, businesses in all states should proactively ensure their AI systems are not resulting in algorithmic discrimination and should start preparing internal risk management policies to carefully track AI-assisted decision-making.
  • All employers should explore ways to disclose and communicate their use of AI systems to candidates and the public, since transparency is a common theme across most AI legislation being introduced at the federal and state levels. Doing so will require a thorough understanding of which AI tools the employer uses and what is proactively being done to protect against bias.
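
For the inventorying step above, one way a compliance team might start is with a simple structured record per AI system that flags high-risk systems whose annual review is overdue. This is a hedged sketch only: the field names, AISystemRecord, needs_review, and the sample entries are hypothetical, and the statute does not mandate any particular inventory format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One inventory entry per AI system in use (illustrative fields only)."""
    name: str
    vendor: str
    use_case: str                 # e.g., "resume screening"
    consequential_decision: bool  # makes or substantially factors into one?
    last_impact_assessment: date | None = None
    last_annual_review: date | None = None

def needs_review(record: AISystemRecord, today: date) -> bool:
    """High-risk systems with no review in the past year need attention."""
    if not record.consequential_decision:
        return False
    last = record.last_annual_review
    return last is None or (today - last).days > 365

inventory = [
    AISystemRecord("ResumeRanker", "Acme AI", "resume screening",
                   consequential_decision=True,
                   last_annual_review=date(2024, 3, 1)),
    AISystemRecord("ChatHelper", "Acme AI", "internal FAQ bot",
                   consequential_decision=False),
]

overdue = [r.name for r in inventory if needs_review(r, date(2026, 2, 1))]
print("Systems overdue for annual review:", overdue)
```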