On June 30, 2026 — 72 days from today — Colorado’s Senate Bill 24-205 becomes effective, making Colorado the first U.S. state to impose comprehensive, risk-based obligations on developers and deployers of high-risk artificial intelligence systems. The law is not a broad prohibition on AI use. It is a structured accountability framework that requires specific pre-deployment steps, ongoing monitoring, consumer-facing disclosures, and regulatory reporting when things go wrong.
Most organizations that will be subject to the law have not completed — or in many cases not started — the impact assessments, risk management programs, and disclosure infrastructure the law requires. At 72 days, that is a serious problem.
What the Colorado AI Act Actually Covers
High-Risk AI Systems
The law’s obligations apply specifically to “high-risk AI systems” — a defined term that is central to understanding scope. A high-risk AI system is one that, when deployed, makes or is a substantial factor in making a consequential decision affecting a Colorado resident in the following domains:
- Education and vocational training: Enrollment decisions, assessment, financial aid determinations
- Employment: Hiring, firing, promotion, compensation, performance evaluation
- Financial services: Credit, insurance, lending, underwriting
- Housing: Rental applications, purchase approvals, pricing
- Legal services: Access to legal representation, case assessment tools
- Healthcare: Diagnosis, treatment recommendations, prior authorization
If your organization uses an AI system that processes data about Colorado residents and produces outputs that influence decisions in any of these categories, you are likely subject to the law as a deployer. If your organization develops such systems and licenses them to third parties, you are subject to the law as a developer.
The law’s reach is not limited to companies headquartered in Colorado. Any developer or deployer whose systems affect Colorado residents is covered.
Developers vs. Deployers
The law distinguishes between two categories of regulated entities with different but overlapping obligations:
Developers build or substantially modify high-risk AI systems and make them available to deployers. Their primary obligations are to disclose the system’s capabilities, limitations, and potential for algorithmic discrimination to deployers — and to report discovered discrimination to the Attorney General within 90 days.
Deployers use high-risk AI systems to make consequential decisions about consumers. They bear the primary ongoing compliance burden: impact assessments, consumer disclosures, annual reviews, and complaint mechanisms.
This distinction matters practically: if your organization buys or licenses an AI system from a third-party vendor, you are the deployer, and the full set of deployer obligations falls on you. Just as you cannot delegate COPPA compliance to your SaaS vendor, you cannot delegate SB 24-205 compliance to your AI vendor.
The Five Core Compliance Requirements
1. Pre-Deployment Impact Assessment
Before deploying a high-risk AI system — and within 90 days of any intentional and substantial modification to a deployed system — deployers must complete a formal impact assessment documenting:
- The system’s purpose and intended use cases
- The deployment context and expected benefits
- Known or foreseeable risks of algorithmic discrimination and the mitigation measures in place
- The categories of data processed as inputs
- The outputs generated and how they are used in decision-making
- An overview of any data used to customize or fine-tune the system
- Transparency measures implemented for affected consumers
- A post-deployment monitoring plan and user safeguard mechanisms
This is not a checkbox exercise. The assessment must be substantive enough to demonstrate that the deployer actually analyzed the discrimination risk of the specific system in the specific context before putting it in front of consumers. Assessments that are generic, templated, or clearly disconnected from the actual system will not survive regulatory scrutiny.
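One practical way to keep assessments tied to the specific system is to capture them as structured records rather than free-form documents. The sketch below is a minimal illustration; the field names track the statutory list above but are our own, and the statute and the AG's final rules, not this schema, define what an assessment must actually contain.

```python
# Minimal sketch of an impact assessment captured as a structured record.
# Field names track the statutory list above but are illustrative only:
# SB 24-205 and the AG's final rules, not this schema, define the required
# content of an assessment.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str                        # purpose and intended use cases
    deployment_context: str             # context and expected benefits
    discrimination_risks: list[str]     # known or foreseeable risks
    mitigations: list[str]              # mitigation measures in place
    input_data_categories: list[str]    # categories of data processed as inputs
    outputs_and_use: str                # outputs and how they drive decisions
    customization_data: str             # data used to customize or fine-tune
    transparency_measures: list[str]    # disclosures for affected consumers
    monitoring_plan: str                # post-deployment monitoring and safeguards
    completed_on: date = field(default_factory=date.today)

    def reassessment_due(self, modified_on: date) -> date:
        """A deployer must reassess within 90 days of an intentional and
        substantial modification (per the requirement described above)."""
        return modified_on + timedelta(days=90)
```

A structured record like this also makes the 90-day reassessment clock auditable: each substantial modification maps to a new record with its own completion date.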
2. Annual Algorithmic Discrimination Review
Deployers must conduct and document an annual review of each deployed high-risk system to determine whether the system is causing algorithmic discrimination in practice. This requires:
- Active monitoring of decision outputs for disparate impact across protected classes
- Analysis of any consumer complaints or appeals that suggest discriminatory outcomes
- Assessment of whether the deployment context has changed in ways that affect the system’s discrimination risk
- Documentation of review findings and any remediation actions taken
The annual review obligation means compliance is not a one-time deployment event. It is an ongoing governance function that must be resourced, scheduled, and documented.
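The statute does not prescribe a statistical test for detecting algorithmic discrimination. One common screening heuristic is the adverse impact ratio, borrowed from employment selection analysis (the EEOC's four-fifths rule of thumb). The sketch below computes it from logged decision outcomes; treat the 0.8 threshold as a trigger for further investigation, not as SB 24-205's legal standard.

```python
# Adverse impact ratio screen over logged decision outcomes. The 0.8
# (four-fifths) threshold is a screening heuristic from employment
# selection analysis, not SB 24-205's legal standard.
from collections import defaultdict

def adverse_impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, favorable_decision) pairs.
    Returns each group's favorable-outcome rate divided by the highest rate."""
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += int(ok)
    rates = {g: favorable[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Synthetic example: group_b is approved at 55% vs. group_a's 80%.
logged = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)

for group, ratio in adverse_impact_ratios(logged).items():
    if ratio < 0.8:
        print(f"{group}: impact ratio {ratio:.2f} -- investigate further")
```

In the synthetic example, group_b's ratio is 0.69, which would flag the deployment for deeper analysis in the annual review.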
3. Consumer Disclosure Requirements
Deployers must clearly disclose to consumers:
- That a high-risk AI system was used in making a consequential decision affecting them
- The purpose and nature of the AI system’s role in the decision
- The categories of data processed
- How to access a human review of the AI-influenced decision
- How to correct inaccurate personal data the system relied upon
The disclosure must be provided before or at the time the consequential decision is communicated to the consumer. It cannot be buried in a privacy policy or presented only in response to a specific request.
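Because the disclosure must travel with the decision itself, it is easiest to generate it from the same record the decision pipeline already produces. A minimal sketch, with hypothetical wording and placeholder URLs; the final format and content should follow the statute and the AG's rules:

```python
# Hypothetical disclosure builder. The content tracks the list above;
# the exact wording and format should follow the statute and the AG's
# final rules. URLs are placeholders.
def build_disclosure(system_role: str, data_categories: list[str],
                     review_url: str, correction_url: str) -> str:
    return (
        "An AI system was a substantial factor in this decision.\n"
        f"Role of the system: {system_role}\n"
        f"Categories of data processed: {', '.join(data_categories)}\n"
        f"Request human review of this decision: {review_url}\n"
        f"Correct inaccurate personal data we relied on: {correction_url}\n"
    )

print(build_disclosure(
    system_role="scored this rental application for tenancy risk",
    data_categories=["credit history", "income verification", "rental history"],
    review_url="https://example.com/appeal",       # placeholder
    correction_url="https://example.com/correct",  # placeholder
))
```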
4. Complaint and Appeal Mechanism
Deployers must implement a mechanism by which consumers can:
- Submit complaints about decisions they believe were the result of algorithmic discrimination
- Request human review of AI-influenced consequential decisions
- Correct personal data used in the decision if that data was inaccurate
The human review requirement is operationally significant. If your organization is using AI to make credit, insurance, employment, or housing decisions at scale, you must have a staffed process capable of reviewing individual decisions that consumers contest.
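Operationally, the mechanism reduces to a queue of contested decisions, each with an owner and a clock. A minimal sketch follows; the 30-day review target is an assumed internal SLA for illustration, since the statute does not prescribe a specific response window here:

```python
# Minimal appeal-queue sketch. The 30-day review target is an assumed
# internal SLA for illustration; document whatever target you adopt.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

REVIEW_SLA = timedelta(days=30)  # assumption, not a statutory deadline

@dataclass
class Appeal:
    decision_id: str
    consumer_id: str
    claim: str          # e.g., "discriminatory outcome", "inaccurate data"
    received: date
    resolved: Optional[date] = None

    @property
    def due(self) -> date:
        return self.received + REVIEW_SLA

    def overdue(self, today: date) -> bool:
        return self.resolved is None and today > self.due
```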
5. Attorney General Reporting for Discovered Discrimination
Both developers and deployers who discover that a high-risk AI system has caused algorithmic discrimination must disclose that discovery to the Colorado Attorney General within 90 days.
This is one of the most consequential provisions in the law. Organizations that conduct thorough impact assessments and annual reviews — as required — and thereby discover discrimination are then affirmatively required to report it to state regulators. The law creates a tension that every compliance officer should think through carefully: the more diligent your monitoring, the more likely you are to discover issues that trigger mandatory regulatory disclosure.
The answer to that tension is not less rigorous monitoring. It is understanding that the AG reporting obligation, combined with a documented remediation response, is a significantly better regulatory position than being discovered through a consumer complaint or enforcement action.
Why Most Deployers Aren’t Ready
The Assessment Backlog Is Real
Impact assessments for AI systems are not a familiar compliance artifact for most organizations. Unlike a privacy impact assessment for a new data collection practice, an AI impact assessment requires technical knowledge of how the model was trained, what data it uses, how outputs are generated, and how those outputs translate into decisions. That assessment requires collaboration between legal, compliance, data science, and product teams — a cross-functional exercise that takes weeks to do properly.
Organizations deploying multiple high-risk AI systems need multiple assessments. An insurer using separate AI models for underwriting, claims processing, and fraud detection has three separate high-risk deployments, each requiring its own assessment.
Vendor Contracts Are Not Ready
Most enterprise AI procurement agreements do not include the disclosures that SB 24-205 requires developers to provide to deployers. Organizations that purchased AI systems from third-party vendors and have not updated their contracts to require system capability disclosures, discrimination risk information, and cooperation with impact assessments are missing a key piece of the compliance puzzle.
The Human Review Requirement Has Staffing Implications
Building a credible human review process for AI-influenced decisions is not purely a legal exercise. It requires trained reviewers who understand how to evaluate an AI system’s decision, access to the data the system used, and processes for reversing or modifying AI-generated outcomes. For organizations making thousands of consequential decisions per month, this is an operational buildout — not just a policy update.
Monitoring Infrastructure Is Underdeveloped
Most organizations deploying AI systems do not have monitoring infrastructure capable of detecting disparate impact in real time or even on a quarterly basis. The annual review obligation requires this capability. Building it requires data, tooling, and expertise that most non-technology-native organizations have not invested in.
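The prerequisite for all of this is that decision outputs are logged with enough context to analyze later. A minimal sketch of the per-decision record worth capturing at decision time (the field names are illustrative, not statutory):

```python
# Illustrative per-decision log record. Without something like this captured
# at decision time, an annual disparate-impact review has nothing to analyze.
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class DecisionLog:
    decision_id: str
    system_name: str                   # which high-risk system produced the output
    system_version: str                # ties the outcome to the version assessed
    decided_at: datetime
    input_categories: tuple[str, ...]  # data categories, not raw personal data
    model_output: str                  # e.g., a score band or recommendation
    final_decision: str                # the consequential decision actually made
    human_override: bool               # whether a reviewer changed the output
```

Recording the system version alongside each outcome also lets you tell whether a disparity traces to the model that was assessed or to a later modification.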
The Colorado Attorney General’s Rulemaking
The Colorado AG’s office has been conducting rulemaking to implement SB 24-205, including public comment periods on draft rules. The final rules will clarify several provisions that are currently ambiguous — including the precise definition of “substantial modification,” the specific format and content requirements for impact assessments, and the procedures for AG reporting.
Compliance programs should monitor the AG rulemaking output closely. The rules will be the operational implementation guide for everything the statute sets out in general terms. Organizations that wait for the rules before beginning compliance preparation will not have enough time to implement.
Interaction With Other AI Governance Frameworks
Colorado’s AI Act does not exist in isolation. Organizations subject to it are also navigating:
EU AI Act: The EU’s risk-based AI regulation, with high-risk AI obligations taking effect under a phased timeline, imposes parallel impact assessment and human oversight requirements for organizations deploying AI in EU markets.
White House National Policy Framework (March 2026): The White House has recommended preempting state AI laws to create a unified federal standard. If federal preemption legislation passes, it could displace SB 24-205. Organizations should track federal developments but cannot assume preemption will occur before June 30.
NIST AI Risk Management Framework (AI RMF): NIST’s voluntary framework provides a practical methodology for AI risk assessment that maps reasonably well onto Colorado’s impact assessment requirements. Organizations that have already implemented the AI RMF have a strong foundation for SB 24-205 compliance.
Sector-specific regulators: In healthcare, the FDA’s AI/ML software as a medical device framework applies. In financial services, CFPB and banking regulators have issued AI guidance. Colorado’s requirements layer on top of — and do not displace — these sector-specific obligations.
Compliance Checklist: SB 24-205 Readiness
Scoping (complete immediately; see the inventory sketch after this checklist):
- Inventory all AI systems used in decision-making affecting Colorado residents
- Classify each system against the high-risk domain list: education, employment, financial services, housing, legal services, healthcare
- Identify whether your organization is a developer, deployer, or both for each system
- Map which systems are vendor-provided vs. internally developed
Impact assessments (begin now; a proper assessment takes weeks, and one started on Day 60 of a 72-day window cannot be finished in time):
- Assign cross-functional ownership (legal, compliance, data science, product) for each required assessment
- Obtain system capability disclosures and training data documentation from third-party AI vendors
- Complete formal impact assessments for all high-risk deployments before June 30
- Document assessment methodology and retain records
Consumer-facing infrastructure:
- Update consumer-facing notices for all high-risk AI applications to include SB 24-205 disclosures
- Build or designate human review processes for contested AI-influenced decisions
- Implement complaint intake mechanisms with documented SLAs for response
- Create data correction procedures for personal data used in AI decision-making
Annual review program:
- Establish a monitoring program capable of detecting disparate impact in AI decision outputs
- Schedule annual reviews for all high-risk deployments on a documented calendar
- Assign ownership for review execution and documentation
- Build AG reporting procedures for discovered algorithmic discrimination
Vendor management:
- Update third-party AI vendor contracts to require the disclosures SB 24-205 mandates
- Include AI Act compliance cooperation obligations in new and renewed vendor agreements
- Require vendors to notify you of system modifications that may trigger your 90-day reassessment obligation
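The scoping step above lends itself to a simple, queryable inventory rather than a spreadsheet that goes stale. A minimal sketch, with the article's high-risk domain list hard-coded and hypothetical system entries:

```python
# Minimal scoping inventory sketch. Domain labels follow the article's
# high-risk list; the system entries are hypothetical.
HIGH_RISK_DOMAINS = {
    "education", "employment", "financial_services",
    "housing", "legal_services", "healthcare",
}

inventory = [
    {"name": "underwriting-model-v3", "domain": "financial_services",
     "role": "deployer", "source": "vendor", "affects_co_residents": True},
    {"name": "resume-screener", "domain": "employment",
     "role": "deployer", "source": "internal", "affects_co_residents": True},
    {"name": "marketing-copy-llm", "domain": "marketing",  # out of scope
     "role": "deployer", "source": "vendor", "affects_co_residents": True},
]

in_scope = [s for s in inventory
            if s["affects_co_residents"] and s["domain"] in HIGH_RISK_DOMAINS]
for s in in_scope:
    print(f"{s['name']}: {s['role']} obligations; assessment needed before June 30")
```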
Conclusion
Colorado’s SB 24-205 is the most operationally detailed AI accountability law yet enacted in the United States. It does not ban AI in high-stakes decisions. It creates a structured accountability framework that requires organizations to understand what their AI systems are doing, document that understanding, tell consumers about it, and report to regulators when they find problems.
That framework is achievable — but not in 72 days for an organization starting from scratch. Organizations that have been tracking the law since its 2024 enactment and began implementation planning in early 2026 are in a position to meet the June 30 deadline. Those that have been waiting for the AG’s final rules, or for federal preemption that may not come, are not.
The impact assessment is the starting point. Start there, today.
This article is provided for informational purposes only and does not constitute legal advice. Organizations should consult qualified legal counsel regarding their specific compliance obligations under Colorado SB 24-205 and applicable AI governance frameworks.