Written by Brian Sabey, Michael T. Batt, Jennifer L. Davis and Lindsay K. McManus from Hall Render Killian Heath & Lyman PC on January 24, 2025
The Colorado Artificial Intelligence Act (“AI Act”), enacted in 2024, becomes effective February 1, 2026. It has been called the first comprehensive AI law in the country, has been likened to the European Union’s AI Act, and is predicted to be tested first in employment disputes.
In Part 1 of this article, we cover the basics of the AI Act and focus on how it may affect employers. Come back for Part 2 where we focus on how the new law may affect health care providers.
The Basics
In short, the AI Act requires developers and deployers who do business in Colorado to use reasonable care to avoid algorithmic discrimination in high-risk artificial intelligence systems affecting consumers who are Colorado residents. The AI Act applies to predictive AI systems, which make or inform decisions, rather than to newer generative AI systems, such as ChatGPT, which create content.
“Developers” are those who develop or substantially modify an AI system and “deployers” are those who use AI systems. “Algorithmic discrimination” is defined as the unlawful differential treatment or impact that disfavors an individual or group based on protected characteristics. Protected characteristics are actual or perceived age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex and veteran status.
An artificial intelligence system is considered “high risk” when it is used to make, or is a substantial factor in making, a “consequential decision,” meaning a decision that materially affects the provision or denial, or the cost or terms, of:
- Employment or employment opportunities
- Education enrollment or opportunities
- Financial or lending services
- Health care services
- Housing
- Insurance
- Legal services
The AI Act applies to developers and deployers who do business in Colorado, regardless of where the business is located or incorporated. The AI Act is triggered when an AI system impacts a “consumer” which is defined as a Colorado resident, regardless of where the Colorado resident is physically located.
The Stick
Failure to use “reasonable care” is an unfair trade practice under Colorado’s Consumer Protection Act. Each violation carries a penalty of up to $20,000, or up to $50,000 if the violation is committed against an elderly person.
The AI Act does not create a private cause of action. Enforcement is under the authority of the Attorney General (“AG”). However, individuals still have the right to bring discrimination claims under other existing state and federal laws.
The Carrot
How can a company demonstrate that it has used “reasonable care” in the deployment of a high-risk AI system? The law contains a rebuttable presumption, meaning that if certain practices are adopted, compliance with the law is presumed (unless the state AG demonstrates otherwise in a court of law). To benefit from this safe harbor, deployers must adopt a number of practices, including:
- Implementing a risk management policy and program governing the use of high-risk AI systems;
- Completing an annual impact assessment to identify and mitigate the risks of algorithmic discrimination;
- Notifying individuals when they are interacting with an AI system, notifying them when the system has made a decision adverse to them, and explaining how to appeal such decisions;
- Posting a website notice about the company’s use of AI systems; and
- Reporting to the Colorado AG the discovery of algorithmic discrimination caused by an AI system.
The Employment Context
In the employment context, employers will most often be considered deployers when they utilize an AI system to make or help make employment-related decisions, such as resume reviews, AI video interviews and other hiring and promotion decisions. (An employer may also be considered a developer if it creates or substantially modifies an AI system, a role that carries similar obligations.)
For example, one common use of AI by employers is to perform initial resume reviews to narrow a pool of applicants. This way, fewer resumes would proceed to the human review stage, thereby saving time and resources. The AI Act would be triggered in this scenario because (1) a determination as to whether an applicant moves on to the next stage would be considered a “consequential decision;” (2) affecting an “employment opportunity;” and (3) made by an AI system, which renders it a “high-risk AI system” subject to the AI Act.
To utilize AI for resume reviews or other employment decisions, an employer must use “reasonable care” to prevent algorithmic discrimination or risk sanctions for unfair trade practices. An employer can demonstrate reasonable care by first implementing an AI governance risk management policy and program (for example, one modeled on the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework). Second, the employer should conduct an impact assessment of the AI system that addresses how the system avoids or mitigates bias and unfair disadvantage to candidates based on race, ethnicity, sex, limited English proficiency and other protected characteristics. Third, the employer must notify job applicants that they are interacting with an AI system, notify them when the system has made a decision adverse to them, and explain how to appeal such decisions. These individual notice requirements may be satisfied through job application forms and written communications with applicants. The employer also must post a website notice about its use of AI. Last, the employer must notify the Colorado AG upon discovering any algorithmic discrimination caused by the AI system, such as an AI resume review tool that inadvertently screened out candidates based on gender, national origin or religion.
Employers should weigh the benefits and burdens of using an AI system for resume reviews and other employment-related purposes as well as the protections provided by meeting the safe harbor. Despite the additional obligations, if an employer is faced with voluminous job applications to process, the benefit of AI assistance and protections of the safe harbor may be worth the investment in time and resources to comply with the AI Act.
Employers should note that while the AI Act references the Colorado Privacy Act (“CPA”), the CPA does not extend its protections to job applicants or employees. The CPA applies only to businesses that process the “personal data” of “consumers” acting in an individual or household context, and it requires those businesses to notify consumers of their statutory right to opt out of having their personal data processed by an AI system. This is favorable to employers because it means that neither the AI Act nor the CPA requires them to issue data processing opt-out notices to job applicants or employees.
Practical Takeaways
For Colorado hospitals and health care organizations (and other entities doing business in Colorado), consider taking the following steps now:
- Identify any current or planned uses of high-risk AI systems;
- Establish a multi-disciplinary AI workgroup (or harness an existing technology, compliance or data governance committee) to create an AI Act work plan;
- Become familiar with the federal NIST AI Risk Management Framework for further guidance;
- Monitor any statutory amendments to the AI Act and/or regulations promulgated by the Colorado AG; and
- Engage outside resources as needed.