When Colorado Governor Jared Polis signed Senate Bill 24-205 in May 2024, Colorado became the first state in the United States to enact comprehensive artificial intelligence legislation. The law — often called the Colorado AI Act — was modeled on the EU AI Act’s risk-based framework and imposed obligations on both developers and deployers of AI systems used in consequential decisions: employment, credit, healthcare, housing, education.

Two years later, before the law has ever taken effect, Colorado is moving to repeal and replace it.

Senate Bill 189, advanced by Colorado lawmakers on May 4, 2026, with nine days left in the legislative session, would effectively gut SB 24-205 and replace it with a significantly narrower framework. If enacted, the new bill would delay the operative date for AI obligations to January 2027, eliminate the requirement that companies explain how their AI systems work, and replace the original law’s comprehensive risk assessment and disclosure regime with a basic consumer notice obligation.

Colorado is the first state to significantly walk back its own AI legislation. The reasons it is doing so, and what the replacement framework actually requires, matter for any organization that has been tracking state-level AI compliance.


What the Original 2024 Law Required

To understand what SB 189 removes, it is worth reviewing what SB 24-205 required.

Colorado’s 2024 AI Act applied to “developers” and “deployers” of “high-risk artificial intelligence systems” — systems that make or substantially influence consequential decisions in six categories: employment, education, financial services, essential government and public services, healthcare, and housing.

Obligations on developers included:

  • Conducting impact assessments for high-risk systems before deployment
  • Making impact assessments available to deployers
  • Disclosing known risks of algorithmic discrimination
  • Notifying deployers of material limitations on the system’s intended use

Obligations on deployers included:

  • Conducting their own impact assessments for high-risk systems they deploy
  • Providing consumers notice that they are subject to a high-risk AI decision
  • Providing consumers an explanation of the basis for a consequential decision — including an explanation of how the AI system operates that is specific enough for the consumer to understand why the decision was made
  • Maintaining a complaint process
  • Annual reporting to the Colorado Attorney General

The explainability requirement — requiring companies to disclose not just that AI was used, but how it worked and why it produced a particular outcome — was the most contested provision. Critics argued it was technically infeasible for complex machine learning systems and would amount to a de facto prohibition on certain AI deployment models.

The attorney general reporting requirement also drew business opposition: detailed annual filings about AI systems, high-risk use categories, and complaint outcomes created ongoing compliance overhead and potential competitive exposure.

The law was scheduled to take effect June 1, 2026.


What SB 189 Would Replace It With

Senate Bill 189 makes three structural changes to Colorado’s AI regulatory framework.

1. Delays the Effective Date to January 2027

SB 189 pushes the start date for AI obligations from June 2026 to January 2027. This gives businesses an additional seven months beyond the original deadline and reflects the pace of legislative negotiation — with SB 189 advancing in May 2026, organizations had roughly one month between its introduction and the June 2026 operative date of the original law.

2. Eliminates the Explainability Requirement

The most significant substantive change is the elimination of the requirement that companies explain how their AI systems work.

Under the original SB 24-205, deployers who made adverse consequential decisions using high-risk AI had to provide consumers with a specific, intelligible explanation of the decision — including an explanation of the system’s logic. This requirement tracked Article 22 of the GDPR and similar explainability mandates in EU AI regulation.

SB 189 eliminates this. Under the proposed replacement framework, companies do not need to explain their AI systems’ decision logic to consumers. The bill’s sponsor described the new approach as “more of a notice bill” — focused on informing consumers that automated decision-making is happening, not on explaining how it works.

3. Replaces Risk Assessment Framework with Notice Obligations

SB 189 does not require deployers to conduct data protection or algorithmic impact assessments as a precondition to deploying high-risk AI.

Instead, the bill requires:

  • Disclosure that automated decision-making technology is being used — consumers must be told, in advance, that AI is involved in a consequential decision
  • More detailed information on request — if a consumer requests more detail within 30 days of an adverse consequential decision, the deployer must provide information about the automated system’s role in the decision
  • Appeals process disclosure — deployers must provide information about any appeals process, including the opportunity to correct inaccurate personal data and obtain “meaningful human review to the extent commercially reasonable”
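The three obligations above lend themselves to a simple internal checklist. As a purely illustrative sketch — the class and field names below are hypothetical, not drawn from the bill text — a deployer might track the outstanding disclosure duties for each consequential decision like this:

```python
from dataclasses import dataclass

# Hypothetical compliance-tracking sketch. Field names and logic are
# illustrative; SB 189's final text defines the actual obligations.
@dataclass
class ConsumerAIDisclosure:
    """Tracks the three SB 189-style disclosure duties for one decision."""
    advance_notice_given: bool           # consumer told in advance that AI is involved
    decision_adverse: bool
    detail_requested: bool = False       # consumer asked for more within 30 days
    detail_provided: bool = False
    appeals_info_provided: bool = False  # correction rights + human review info

    def outstanding_duties(self) -> list[str]:
        """Return the disclosure duties not yet satisfied for this decision."""
        duties = []
        if not self.advance_notice_given:
            duties.append("advance notice of automated decision-making")
        if self.decision_adverse and self.detail_requested and not self.detail_provided:
            duties.append("information about the automated system's role")
        if self.decision_adverse and not self.appeals_info_provided:
            duties.append("appeals process disclosure")
        return duties
```

A record with an adverse decision and a pending consumer request would report two open duties; a non-adverse decision with advance notice given would report none.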

The “to the extent commercially reasonable” qualification on human review is significant. The original law required human oversight of consequential decisions; the replacement leaves that requirement subject to a commercial reasonableness standard that will be defined through litigation or regulatory guidance, not through the statute itself.


Why Colorado Is Doing This

The retreat from SB 24-205 reflects dynamics that have been present since the law’s enactment.

Business opposition was immediate and sustained. Colorado’s business community, including the Colorado Chamber of Commerce, argued that the original law was technically infeasible, created competitive disadvantage relative to states without AI regulation, and imposed documentation and reporting burdens that would deter AI deployment rather than protect consumers.

Governor Polis, who signed SB 24-205, later expressed ambivalence about the law, stating publicly that he was concerned it was “not ready” and signaling openness to revision.

The Governor’s AI Policy Work Group recommended replacement. Rather than amending SB 24-205, the Work Group — convened specifically to address the implementation concerns — recommended a substantively new framework. SB 189 implements that recommendation.

Competitiveness concerns. Some Colorado policymakers argued that a state-level AI law imposing obligations more stringent than what the federal government or other states require creates a compliance burden that pushes AI development activity out of Colorado. A “race to the bottom” on AI regulation is a political dynamic that SB 189 reflects, even if its proponents would not describe it in those terms.


The Pattern: State AI Legislation Under Pressure

Colorado is not alone. State AI legislation has consistently struggled to survive the gap between enactment and implementation.

Virginia’s legislature passed a comprehensive high-risk AI bill, which Governor Youngkin vetoed in 2025.

Texas advanced several AI-related bills that were significantly weakened or did not advance before legislative sessions ended.

California — which would have created the largest state AI compliance obligation in the U.S. given its economy — passed SB 1047, a sweeping AI safety bill, only to see it vetoed by Governor Newsom in September 2024. Subsequent California AI legislation has been more targeted and less comprehensive.

The EU pattern is different but instructive. The EU AI Act is itself subject to the Digital Omnibus rollback debate — but the EU’s rollback is about implementation timing and administrative burden, not about eliminating substantive risk assessment or explainability requirements. Colorado’s rewrite goes further than the Omnibus in gutting the original framework.

The emerging picture: comprehensive state AI regulation — imposing risk assessments, explainability requirements, and annual regulatory filings — has not survived to implementation in any U.S. state. What remains, when legislation survives at all, is narrower: consumer notice, adverse decision disclosure, and rights to request human review.


What SB 189 Still Requires

Even under the narrowed framework, SB 189 imposes operative obligations that organizations deploying AI in Colorado need to address.

Consumer notice before AI-driven decisions. If your organization uses automated decision-making in employment, education, financial services, healthcare, or housing affecting Colorado residents, you will need to provide advance notice. Burying this in a terms of service disclosure is unlikely to satisfy the requirement.

30-day response window for adverse decision inquiries. When a consumer asks, within 30 days of an adverse AI-driven decision, for information about how the decision was made, you need to be able to provide a meaningful response. What “meaningful” requires in the absence of the original explainability standard will likely be tested by the AG’s office.

Appeals process. Your organization must have a documented appeals pathway that includes the opportunity to correct data errors and obtain human review where commercially reasonable. If you do not currently have a formal appeals process for AI-driven adverse decisions, you need one.

January 2027 operative date. Organizations that were preparing for June 2026 under the original timeline have additional runway — but the regulatory framework has also changed. Compliance programs built around SB 24-205’s assessment and explainability requirements will need to be rebuilt around SB 189’s notice-and-response model.
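Of these obligations, the 30-day inquiry window is the one most likely to be mishandled by an intake process, because the counting convention is easy to leave unstated. A minimal sketch, assuming calendar days counted from the date of the adverse decision (the statute’s own counting rules would control in practice):

```python
from datetime import date, timedelta

# Assumption: the 30-day window runs in calendar days from the date of the
# adverse decision. SB 189's final text governs how the window actually works.
REQUEST_WINDOW = timedelta(days=30)

def request_is_timely(decision_date: date, request_date: date) -> bool:
    """True if a consumer's information request falls within the window."""
    elapsed = request_date - decision_date
    return timedelta(0) <= elapsed <= REQUEST_WINDOW
```

Under this (assumed) convention, a request made exactly 30 days after the decision is timely; a request made on day 31, or before the decision date, is not.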


Implications for Multi-State AI Compliance Planning

The Colorado reversal is significant not because Colorado is the most important state AI regulator — it is not — but because it is the most prominent example of comprehensive AI legislation failing to survive contact with political and business reality.

Organizations building multi-state AI compliance programs should draw two conclusions from Colorado’s experience:

First, build compliance programs on the obligations that are actually enacted and operative, not on the most ambitious version of pending legislation. The instinct to build toward the highest conceivable standard can lead organizations to invest in compliance infrastructure — explainability tools, impact assessment frameworks, AG reporting processes — that the regulatory framework ultimately does not require.

Second, notice and human review are becoming the baseline. Even as risk assessment and explainability requirements have been rolled back, the baseline obligations — consumer notice, adverse decision disclosure, appeals process, human review on request — have survived. These are the obligations to build around.

For context on where other states stand, see our coverage of 20 states with active comprehensive consumer privacy laws and the EU’s parallel debate on AI Act rollbacks through the Digital Omnibus package.


Sources: Colorado Public Radio (Colorado’s AI compromise would drop requirement that companies explain how their technology works, May 4, 2026); Colorado Sun (Colorado AI law change bill introduced, May 2026); Colorado Newsline (New bill would narrow scope of Colorado’s landmark 2024 AI law); Colorado Politics (Colorado lawmakers advance rewrite of 2024 law to regulate artificial intelligence); Troutman Privacy (Proposed State AI Law Update, May 4, 2026); Colorado General Assembly SB24-205; Colorado Chamber of Commerce. This article is provided for informational purposes only and does not constitute legal advice.