On May 12, 2026, the Colorado legislature gave final passage to Senate Bill 26-189, a bill that repeals and replaces the Colorado Artificial Intelligence Act (SB 24-205) before its original obligations ever took effect. The bill now moves to Governor Jared Polis — who convened the working group that produced the framework and has been the single most important political force pushing for it. He is expected to sign.

If you have been building a Colorado AI Act compliance program, stop and re-baseline. The compliance regime you were preparing for — duty of care, risk management programs, annual impact assessments, algorithmic-discrimination disclosures to the Colorado Attorney General — is gone. In its place is a narrower disclosure-and-notice framework built around a defined term you will be living with for years: “covered ADMT.”

This article unpacks what passed, what changed, what survived, and what it means for your AI governance program — particularly if you operate across California, Illinois, Connecticut, Texas, and Colorado.


1. The Headline: What Just Happened

The Colorado AI Act (SB 24-205), enacted in May 2024, was the first comprehensive state AI law in the United States. It was modeled on the EU AI Act’s risk-based framework and imposed broad duties on both developers and deployers of “high-risk” AI systems used in consequential decisions about employment, credit, housing, healthcare, education, insurance, and essential government services.

It never took effect.

After two years, three legislative sessions, a failed 2025 special session, a federal lawsuit by Elon Musk’s xAI joined by the U.S. Department of Justice, and an April 27, 2026 federal court order temporarily blocking enforcement, the legislature passed SB 189 to repeal the original framework and replace it with something materially narrower.

The vote was bipartisan and lopsided: 34-1 in the Senate and 57-6 in the House. The Senate concurred in House amendments on May 12.

Here is what you need to internalize before anything else:

  • Effective date: January 1, 2027. The Colorado legislature does not reconvene until January 11, 2027, which means the bill cannot be amended again before it takes effect. The years of uncertainty about whether any Colorado AI law would actually go live are over.
  • Attorney General rulemaking deadline: January 1, 2027. AG Phil Weiser must complete rulemaking by the time the substantive provisions take effect. He has already indicated enforcement will not begin until rulemaking is complete.
  • Enforcement: Colorado AG only. No private right of action.
  • Right to cure: 60 days, sunsetting in 2030. This was Senate Majority Leader Robert Rodriguez’s red line — he refused to let the bill move without a sunset.
  • Colorado still has the most far-reaching legislatively enacted private-sector AI law of any state, even after this narrowing. Don’t let “business-friendly” framing fool you into deprioritizing it.

2. The Russian Nesting Doll: What Is “Covered ADMT”?

The entire bill turns on a single term: covered ADMT. The definition is built from a stack of other defined terms, each of which has its own carve-outs. Get this wrong in scoping and you will either over-comply (wasted budget) or under-comply (regulatory exposure).

Layer 1: Automated Decision-Making Technology (ADMT)

“A technology that processes personal data and uses computation to generate output, including predictions, recommendations, classifications, rankings, scores, or other information that is used to make, guide, or assist a decision, judgment, or determination concerning an individual.”

This is broad on purpose but with explicit carve-outs. The following are not ADMT under SB 189:

  • Anti-malware, firewalls, spam filtering, spell-checking
  • Databases, web hosting, web caching, spreadsheets that require human analysis and do not use machine learning, foundation models, or large language models
  • Tools used solely to summarize, organize, translate, draft, route, or present information for human review or administrative processing
  • Consumer-facing LLMs and natural-language chat tools, provided they are not contracted, advertised, marketed, configured, or intended for use in a consequential decision, and they are governed by an acceptable use policy prohibiting use of generated content in consequential decisions

That last carve-out — the chat exemption — is significant. A general-purpose ChatGPT-style assistant deployed for internal employee productivity does not fall within scope, provided you maintain an AUP that explicitly prohibits using its output in consequential decisions and you do not market or configure it for that purpose.

Layer 2: “Materially Influence”

ADMT becomes “covered” only when it is used to materially influence a consequential decision. Materially influence means the ADMT output is a “non-de minimis factor” in the decision, “including by constraining, ranking, scoring, recommending, classifying, or otherwise meaningfully altering how a consequential decision is made.”

Incidental, trivial, or clerical uses are explicitly excluded. The Attorney General will issue rulemaking that provides presumptions and illustrative examples — that rulemaking is what GRC teams should be watching most closely between now and January 1, 2027.

Layer 3: “Consequential Decision”

A decision relating to a consumer’s access to, eligibility for, opportunity for, or compensation related to a covered domain.

Layer 4: “Covered Domain”

Seven domains:

  1. Education enrollment or an education opportunity
  2. Employment or an employment opportunity that creates or may create an employer-employee relationship
  3. The lease or purchase of residential real estate in Colorado
  4. Financial or lending services
  5. Insurance — underwriting, pricing, coverage, claims adjudication, or other determinations materially affecting access to benefits
  6. Healthcare services
  7. Essential government services and public benefits, including eligibility and renewal determinations

Notably absent: legal services (which were covered by the original Colorado AI Act), and advertising/marketing/search/content moderation (explicitly excluded). The ad-tech and recommender-system industries are outside the perimeter as enacted.

Layer 5: “Consumer”

The bill links to Colorado’s consumer-protection statutory framework. This linkage is important and limiting: entities and data categories exempt under the Colorado Privacy Act carry their exemptions through as coverage gaps in SB 189.

Domain-Specific Exclusions

The bill also exempts:

  • HIPAA-covered entities to the extent of HIPAA-compliant activities
  • GLBA-regulated financial data to the extent of GLBA compliance
  • Insurance regulated under state insurance practice law
  • Fraud prevention activities (facial recognition for fraud prevention is now exempt — the working-group draft had carved it back in)
  • AML/CTF compliance
  • Activities required by federal law

The nesting works in the practitioner’s favor when scoping: an AI tool only sits in scope if you can answer yes to every layer. Build a scoping decision tree and document the answer for each system in your AI inventory.
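The layered test described above can be sketched as a simple scoping function. Everything below is illustrative — the field names, the domain labels, and the `SystemProfile` schema are assumptions for the sketch, not statutory text; real scoping calls need counsel.

```python
from dataclasses import dataclass

# Illustrative labels for the seven covered domains listed above
COVERED_DOMAINS = {
    "education", "employment", "residential_real_estate",
    "financial_services", "insurance", "healthcare",
    "government_services",
}

@dataclass
class SystemProfile:
    """Scoping answers for one system in the AI inventory (hypothetical schema)."""
    is_admt: bool                 # Layer 1: meets the ADMT definition, no carve-out applies
    materially_influences: bool   # Layer 2: non-de minimis factor in the decision
    is_consequential: bool        # Layer 3: affects access, eligibility, opportunity, compensation
    domain: str                   # Layer 4: which domain the decision falls in
    colorado_consumer: bool       # Layer 5: decision concerns a Colorado consumer

def in_scope(s: SystemProfile) -> bool:
    """A system is covered ADMT only if every layer answers yes."""
    return (
        s.is_admt
        and s.materially_influences
        and s.is_consequential
        and s.domain in COVERED_DOMAINS
        and s.colorado_consumer
    )

# An internal chatbot governed by an AUP barring use in consequential
# decisions fails Layer 1 and drops out immediately; a resume-ranking
# tool used on Colorado applicants answers yes at every layer.
chatbot = SystemProfile(False, False, False, "employment", True)
resume_ranker = SystemProfile(True, True, True, "employment", True)
```

Documenting the per-layer answer for each inventoried system, rather than just the final yes/no, is what gives you a defensible record if the scoping call is later questioned.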


3. Developer Obligations: Documentation Pass-Through

If you build, sell, license, or substantially modify a covered ADMT, your sole job under SB 189 is to give deployers the information they need to comply.

Beginning January 1, 2027, developers must provide deployers with documentation covering:

  • The intended uses and known harmful or inappropriate uses
  • The categories of personal data used to train the system, to the extent known
  • Known limitations, risks, and circumstances in which the system should not be used
  • Instructions for appropriate use, monitoring, and meaningful human review
  • Any other information necessary for the deployer to satisfy SB 189’s deployer obligations

Developers do not have to disclose proprietary source code, model weights, or other trade secrets.

Material updates, intentional modifications, and changes to intended use or risk mitigation must be communicated to deployers within a reasonable time. The bill allows developers to satisfy this through public release notes, provided they directly notify each deployer that the release notes were published. This is a significant practical concession.

Records retention: three years, including version identifiers, changelogs, and material-update documentation.

These obligations only attach where the ADMT was marketed, advertised, configured, contracted, sold, or licensed to be used in consequential decisions — or where the developer becomes aware of such use consistent with intended purposes. Pure research-only development is excluded, as is internal-only development never made available to another person for use in a consequential decision.

What’s gone compared to SB 24-205:

  • No mandated public summary of training data
  • No disclosure of “nature, source, and extent” of personal data processing
  • No obligation to explain how the system reaches outputs
  • No duty of reasonable care to avoid algorithmic discrimination
  • No annual impact-assessment-style reporting

4. Deployer Obligations: Notice, Disclosure, and Limited Rights

Deployers carry the bulk of the consumer-facing work. SB 189 imposes four categories of obligation.

4.1 Pre-Use Notice

Before using a covered ADMT to materially influence a consequential decision, the deployer must provide a clear and conspicuous notice that ADMT is or will be used. The notice must inform the consumer that ADMT is being used, provide instructions for how to obtain additional information, and be reasonably accessible at the point of consumer interaction.

The bill does not define “clear and conspicuous” — that will come in AG rulemaking. Companies operating in California already have CPPA ADMT pre-use notice templates that can be adapted as a starting point.

4.2 Post-Adverse-Outcome Disclosure (30 days)

If the covered ADMT materially influences a consequential decision that results in an adverse outcome for the consumer, within 30 days the deployer must provide:

  • A plain-language description of the decision
  • A plain-language description of the ADMT’s role in the decision
  • Instructions for requesting additional information
  • An explanation of the consumer’s rights under the bill and how to exercise them

An adverse outcome generally means a decision that denies access, eligibility, or opportunity, or that produces materially less favorable price, cost-sharing, compensation, or terms than similarly situated consumers receive.

4.3 Consumer Rights — Narrower Than They Look

Three rights attach to consumers experiencing an adverse outcome:

  1. Access to the personal data the deployer used to make the decision
  2. Correction of factually inaccurate personal data
  3. Meaningful human review and reconsideration, where applicable and commercially reasonable

Meaningful human review is defined to require a trained individual with authority to approve, modify, or override the decision, who does not default to the system’s output and who has access to information about the system’s intended use and limitations.

The rights to access and correct are linked to the Colorado Privacy Act, which exempts many of the entities and data categories most likely to be involved in consequential decisions — financial institutions covered by GLBA, HIPAA-covered entities, employment data, and others. The practical scope of these rights is therefore considerably smaller than a casual read suggests.

4.4 Recordkeeping — Three Years

Deployers must retain records sufficient to demonstrate compliance for three years from the date of the consequential decision.


5. Enforcement, Cure Period, and Liability

Enforcement

The Colorado Attorney General has sole enforcement authority. Violations are deemed deceptive trade practices under the Colorado Consumer Protection Act. No private right of action is created.

Cure Period

Before initiating action, the AG must provide a 60-day cure opportunity — but only where the AG determines a cure is possible, and not for knowing or repeated violations. This cure period sunsets on January 1, 2030, by which point the AG’s office will have three years of enforcement experience behind it. Plan around the sunset, not the cure.

Liability — Where SB 189 Will Matter Most in Court

Anti-discrimination liability survives. A developer or deployer may be held liable in an action alleging unlawful discrimination under state anti-discrimination laws — including the Colorado Anti-Discrimination Act — arising from a consequential decision materially influenced by a covered ADMT.

Fault is allocated by relative responsibility. Joint and several liability does not apply except as already permitted under existing law. A developer’s liability is generally limited to instances where the deployer used the ADMT for an intended purpose.

Indemnification clauses for discrimination liability are void. If a contract between a developer and deployer purports to indemnify, defend, or hold harmless either party from liability for damages resulting from that party’s own acts or omissions related to using ADMT in consequential decisions in violation of the Colorado Anti-Discrimination Act or other Colorado anti-discrimination law, that provision is contrary to public policy and void.

This will break a lot of existing AI vendor contracts. If you have indemnification language drafted prior to May 2026 that attempts to shift discrimination liability between developer and deployer, your legal team needs to review it now.


6. The Multi-State Reality: Colorado Doesn’t Live in Isolation

The most important framing for SB 189 is this: even though it narrows obligations, Colorado still has the most far-reaching legislatively enacted deployer/private-sector AI law of any state. And the state regulatory landscape is dramatically more complex than it was when the original Colorado AI Act passed in 2024.

A non-exhaustive scan of what changed in the last 24 months:

  • California CPPA ADMT regulations were finalized September 23, 2025 and took effect January 1, 2026. Risk-assessment compliance is live; pre-use notices for significant decisions phase in April 1, 2027; first attestation is due April 1, 2028.
  • California FEHA amendments (effective October 1, 2025) elevate anti-bias testing as evidence in discrimination claims and impose extended ADS recordkeeping.
  • Illinois HB 3773 amends the Illinois Human Rights Act to treat discriminatory AI use in employment as a civil rights violation, effective January 1, 2026.
  • Connecticut SB 5 passed both chambers on May 1, 2026 and is awaiting the governor’s signature — an omnibus AI bill covering frontier models, companion chatbots, employment-related automated decision processes (AEDPs effective October 1, 2026, with substantive obligations October 1, 2027), synthetic content provenance, and a voluntary safe harbor program.
  • Texas TRAIGA (HB 149) took effect January 1, 2026 — though employment is explicitly carved out.
  • New York RAISE Act was finalized March 27, 2026 with a January 1, 2027 effective date, covering frontier-model developers with DFS oversight and AG enforcement.
  • NYC Local Law 144 continues to require bias audits for automated employment decision tools.
  • Over a dozen states have passed laws regulating consumer interaction with AI chatbots specifically.

Compounding this: in December 2025, President Trump issued Executive Order 14365 (“Ensuring a National Policy Framework for Artificial Intelligence”), which criticized the patchwork of state AI laws and outlined a path toward federal preemption. Whether preemption arrives — and how broadly it sweeps — is one of the largest open questions in AI compliance planning for 2026–2027.

For a regulated entity operating in multiple states, the practical compliance baseline is no longer “the most stringent state I operate in.” It’s a stacked obligation matrix where:

  • California pulls the risk-assessment-and-attestation burden
  • Connecticut pulls AEDP-specific employment obligations and safe-harbor opportunity
  • Illinois pulls employment and healthcare discrimination liability
  • Colorado pulls disclosure-and-notice obligations for the broadest set of covered domains
  • New York pulls frontier-model transparency
  • Texas pulls consumer-context AI governance (with employment carved out)
  • NYC pulls local employment bias-audit requirements

You cannot build a Colorado-only AI compliance program. You cannot build a “comply with the strictest” program. You have to build a program that pivots on system type × covered domain × state of consumer/employee.
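One way to operationalize that pivot is a lookup keyed on jurisdiction and covered domain, with obligations accumulated across every state a system touches. The entries below loosely paraphrase the mapping sketched in the bullets above — they are illustrative tags, not legal conclusions, and the real matrix needs to be populated with counsel.

```python
# Hypothetical obligation tags keyed by (jurisdiction, domain).
# Entries roughly track the stacked-matrix bullets in the text.
OBLIGATIONS: dict[tuple[str, str], set[str]] = {
    ("CA", "employment"):  {"risk_assessment", "pre_use_notice", "attestation"},
    ("CT", "employment"):  {"aedp_obligations", "safe_harbor_option"},
    ("IL", "employment"):  {"discrimination_liability"},
    ("CO", "employment"):  {"pre_use_notice", "adverse_outcome_disclosure", "recordkeeping"},
    ("CO", "insurance"):   {"pre_use_notice", "adverse_outcome_disclosure", "recordkeeping"},
    ("NYC", "employment"): {"bias_audit"},
}

def obligations_for(jurisdictions: set[str], domain: str) -> set[str]:
    """Union of obligation tags across every jurisdiction a system touches."""
    out: set[str] = set()
    for j in jurisdictions:
        out |= OBLIGATIONS.get((j, domain), set())
    return out
```

A hiring tool used on candidates in California, Colorado, and New York City, for example, picks up the CPPA assessment burden, the Colorado notice-and-disclosure stack, and the Local Law 144 bias audit all at once — which is exactly why a “strictest state” shortcut fails.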


7. Practical Action Items: What to Do Between Now and January 1, 2027

Q2–Q3 2026 — Inventory and Scoping

  1. Inventory every AI system in production, in pilot, and on procurement roadmaps.
  2. For each system, run the SB 189 scoping decision tree: Is it ADMT? Does it materially influence a consequential decision? In which covered domain? For Colorado consumers?
  3. Tag developer-vs-deployer status for each. Many organizations are both.
  4. Cross-walk each in-scope system against California CPPA ADMT, Illinois HB 3773, Connecticut SB 5, NYC Local Law 144, and any state-specific obligations for that domain.

Q3 2026 — Contract Review

  1. Pull every AI vendor contract executed in the last 36 months. Flag indemnification, defense, and hold-harmless clauses related to discrimination liability. SB 189 voids those for Colorado anti-discrimination claims.
  2. Open renegotiation conversations with developer-vendors on documentation packages before January.

Q4 2026 — Operationalization

  1. Draft pre-use notice templates — use California CPPA ADMT pre-use notice language as a starting baseline.
  2. Draft post-adverse-outcome disclosure templates (30-day window).
  3. Build the consumer rights intake flow: access requests, correction requests, meaningful human review requests.
  4. Build the three-year records retention pipeline.
  5. Watch AG Weiser’s rulemaking calendar. The rules will define “clear and conspicuous,” refine “materially influence,” and supply presumptions and illustrative examples.
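The retention pipeline in step 4 reduces to a cutoff check: three years from the date of the consequential decision. A minimal sketch, assuming each record carries that decision date and that the period is measured in calendar years (the statute and rulemaking should be checked for the exact counting rule):

```python
from datetime import date

RETENTION_YEARS = 3  # three years from the date of the consequential decision

def retention_expiry(decision_date: date) -> date:
    """Earliest date a compliance record may be purged (calendar-year add)."""
    try:
        return decision_date.replace(year=decision_date.year + RETENTION_YEARS)
    except ValueError:
        # Feb 29 decision date landing in a non-leap year: roll to Mar 1
        return decision_date.replace(
            year=decision_date.year + RETENTION_YEARS, month=3, day=1
        )

def may_purge(decision_date: date, today: date) -> bool:
    """True once the record has aged past the retention window."""
    return today >= retention_expiry(decision_date)
```

Wiring this check into the deletion job — rather than relying on ad hoc cleanup — is what lets you demonstrate compliance if the AG asks for records from a 2027 decision in 2029.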

Q1 2027 and Beyond — Steady State

  1. Plan around the 2030 cure-period sunset. Map your remediation capacity now so that by 2029 you are operating as if cure is not available.
  2. Reassess as federal preemption policy under EO 14365 evolves.
  3. Re-baseline annually as additional states enact frameworks — there is no plausible scenario where the patchwork shrinks in 2026–2027.

8. The Strategic Takeaway

The original Colorado AI Act was a duty-of-care regime: developers and deployers had to assess, document, and mitigate algorithmic discrimination risk, with a rebuttable presumption of reasonable care attaching to specific compliance activities. SB 189 is a transparency regime: developers and deployers have to notify, disclose, and respond — but the affirmative duty to assess and mitigate is gone.

That distinction matters for how you fund and staff your AI governance program. Under SB 24-205, the AI compliance budget tilted toward risk-assessment tooling, bias testing, and impact-assessment documentation. Under SB 189, the budget tilts toward consumer notice infrastructure, adverse-outcome workflows, recordkeeping pipelines, and vendor-documentation management.

If your program was sized for SB 24-205, you have more headroom than you thought. If your program was sized for “wait and see,” you have eight months and you should not waste them — because Colorado’s AG starts rulemaking immediately, Connecticut is about to ink SB 5, California’s ADMT pre-use notice phase-in lands April 1, 2027, and the federal preemption question may or may not arrive in time to bail anyone out.

The dream of a single, comprehensive, EU AI Act-style framework for the United States died with the Colorado AI Act. What replaces it is a domain-by-domain, state-by-state, role-by-role compliance matrix that AI governance teams will be managing for the foreseeable future.

SB 189 is the new baseline. Build accordingly.


This article is provided for informational purposes only and does not constitute legal advice. Engage qualified counsel for AI compliance program design, contract review, and enforcement-risk evaluation.