The regulatory landscape for artificial intelligence (AI) in the United States is rapidly evolving, and Colorado has emerged as a pioneer in consumer protection with the Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (the “Colorado AI Act”). The first law of its kind in the nation, it aims to reshape the development and deployment of AI systems and set a precedent for other jurisdictions. Taking effect on February 1, 2026, the Colorado AI Act introduces a comprehensive framework to address the potential risks of AI systems, particularly those that make important decisions affecting consumers.
Scope and applicability
Colorado’s AI law is broad in scope, covering both developers and deployers of AI systems within the state. Developers are entities doing business in Colorado that develop or substantially modify AI systems. Deployers are entities doing business in Colorado that deploy high-risk AI systems. The law applies to interactions with AI systems that have a material legal or similarly significant effect on key aspects of consumers’ lives, including education, employment, financial services, government services, health care, housing, insurance, and legal services. Unlike some consumer privacy laws, Colorado’s AI law does not establish a minimum consumer threshold for applicability, meaning entities of any size engaged in covered activities fall within its scope. The term “consumer” refers specifically to Colorado residents.
At the heart of Colorado’s AI law is the classification of “high-risk AI systems”: systems that make, or are a substantial factor in making, consequential decisions in areas such as education, employment, financial services, health care, housing, insurance, and legal services. These decisions are characterized by their significant impact on individuals’ rights, opportunities, and access to essential services. By targeting high-risk systems, the legislation aims to mitigate potential harms, such as algorithmic discrimination, that can result from automated decision-making.
Obligations of developers and deployers
Under the Colorado AI Act, developers of high-risk AI systems are subject to several obligations designed to promote transparency and accountability and to prevent algorithmic discrimination. Developers must provide deployers with comprehensive documentation, including high-level summaries of the data used to train the system, information about the system’s uses and its risks of algorithmic discrimination, the methods used to assess and mitigate those risks, and any information deployers need to fulfill their own obligations, such as conducting impact assessments. Developers must also publicly disclose statements summarizing the types of high-risk AI systems they have developed or substantially modified and how they address known or reasonably foreseeable risks of algorithmic discrimination associated with those systems; these statements must be updated regularly to reflect any changes. Finally, developers must notify the Colorado Attorney General and known deployers within 90 days of discovering, or receiving a credible report from a deployer, that a high-risk AI system has caused or is likely to cause algorithmic discrimination.
Deployers, for their part, have several key obligations to ensure responsible use of AI systems and to guard against algorithmic discrimination. Deployers must implement a comprehensive risk management policy and program to govern their use of high-risk AI systems, including conducting impact assessments to evaluate the risks of algorithmic discrimination associated with deploying those systems. Deployers must also notify consumers when a high-risk AI system makes a consequential decision concerning them; the notification must describe the purpose of the AI system and the decision made, as well as the consumer’s right to correct errors in the personal data the system used and to appeal an adverse decision. Deployers must further publicly disclose statements summarizing the types of high-risk AI systems they deploy, how they address the associated risks of algorithmic discrimination, and the nature, source, and extent of the information they collect and use. Finally, deployers must disclose any algorithmic discrimination they discover to the Colorado Attorney General within 90 days of discovery, ensuring that discriminatory outcomes from high-risk AI systems are promptly reported and addressed.
Exemptions and enforcement
While the Colorado AI Act imposes strict requirements on developers and deployers, it provides several exemptions for certain entities and scenarios.
The Colorado AI Act exempts HIPAA-covered entities when they provide certain AI-generated health care recommendations that are not considered high risk and that require a health care provider to take action to implement them. This exemption recognizes the existing regulatory framework governing health data privacy and ensures alignment with HIPAA requirements.
Insurers subject to Colo. Rev. Stat. § 10-3-1104.9 and its implementing rules are also exempt from certain provisions of the Colorado AI Act. This exemption recognizes the unique regulatory landscape governing the insurance industry and the need to avoid duplicative or conflicting requirements.
Additionally, AI systems acquired by the federal government or federal agencies are exempt from the Colorado AI Act’s requirements. This exemption recognizes the federal government’s authority to regulate AI systems under its jurisdiction and ensures consistency with federal regulation.
Certain banks and credit unions that are subject to substantially similar or more stringent guidelines or regulations governing the use of high-risk AI systems are likewise exempt from certain provisions of the Colorado AI Act. This exemption recognizes existing regulatory oversight in the financial sector and is intended to avoid regulatory duplication.
Enforcement of the Colorado AI Act rests primarily with the Colorado Attorney General’s Office. A violation constitutes an unfair or deceptive trade practice under the Colorado Consumer Protection Act, subject to civil penalties of up to $20,000 per violation, with each violation assessed on a per-consumer or per-transaction basis. Exposure can therefore scale quickly: a single non-compliant practice affecting 1,000 consumers could, in principle, support penalties of up to $20 million. The Colorado AI Act does not provide a private right of action, so enforcement actions may be brought only by the Attorney General. In addition, the Act empowers the Attorney General’s Office to promulgate rules in a variety of areas, including documentation, notices, disclosures, impact assessments, and risk management policies and programs.
Colorado AI Act vs. EU AI Act
The Colorado AI Act and the EU AI Act share the common goal of regulating AI to protect consumers, but they differ in several important respects.
The Colorado AI Act focuses on interactions within the state of Colorado and applies to developers and deployers operating in its jurisdiction. In contrast, the EU AI Act has a broader territorial scope, reaching providers and deployers located outside the EU if their AI systems are placed on the EU market or their outputs are used within the EU. This key difference reflects the EU’s global regulatory ambitions compared with the more localized scope of Colorado’s law.
While both laws address the risks associated with high-risk AI systems, they differ in their categorization criteria. The Colorado AI Act defines high-risk AI systems by their role in consequential decisions in areas such as education, employment, and health care. The EU AI Act covers additional high-risk categories, including biometrics, emotion recognition, law enforcement, and democratic processes, reflecting its more comprehensive approach to identifying and regulating AI risks.
Both laws impose obligations on developers and deployers, albeit with some variations. The Colorado AI Act requires developers to use reasonable care to protect consumers from algorithmic discrimination, backed by strict documentation and disclosure requirements; deployers must implement risk management policies, conduct impact assessments, and honor consumer rights, including the right to appeal adverse decisions. The EU AI Act, by contrast, places greater emphasis on risk management requirements for providers than for deployers. And while the Colorado AI Act emphasizes transparency and consumer rights, the EU AI Act emphasizes explanations of decisions made by high-risk AI systems and requires human oversight, particularly in sensitive areas.
The enforcement mechanisms also differ. The Colorado AI Act grants exclusive enforcement authority to the Colorado Attorney General, with violations treated as deceptive trade practices subject to civil penalties. The EU AI Act, in contrast, empowers national supervisory authorities to enforce its provisions, with penalties of up to €35 million or 7% of total worldwide annual turnover, whichever is higher (for a company with €1 billion in annual turnover, for example, up to €70 million). This divergence reflects the different regulatory frameworks and enforcement priorities of the two jurisdictions.
Preparing for compliance
The Colorado AI Act takes effect on February 1, 2026. Companies that develop or deploy AI systems in Colorado should proactively assess those systems, improve transparency, and implement strong governance frameworks to comply with the new requirements. Focusing on the risks AI poses in high-impact areas can help mitigate harms such as algorithmic discrimination, and staying informed and preparing early will position companies to meet the standards set by this first-in-the-nation law.