State Policy · Colorado

Colorado SB24-205: The Most Comprehensive State AI Law Yet

What the landmark consumer protection bill means for AI developers and deployers

By The AI Lobby · 2025-03-15 · 8 min read

Colorado's SB24-205 sets the standard for state-level AI regulation, requiring algorithmic discrimination impact assessments and consumer disclosures for high-risk AI systems.

The Most Comprehensive State AI Law in America

On May 17, 2024, Colorado Governor Jared Polis signed SB24-205 into law, making Colorado the first state to enact a comprehensive framework for regulating artificial intelligence systems. The bill, officially titled the Colorado Artificial Intelligence Act, was sponsored by Sen. Robert Rodriguez and Rep. Brianna Titone, and it establishes sweeping requirements for developers and deployers of "high-risk" AI systems — particularly those that make consequential decisions about consumers in areas like employment, housing, insurance, education, and lending.

With an effective date of February 1, 2026 — now less than a year away — the law is forcing companies across the country to grapple with compliance obligations that go far beyond anything previously required at the state level. For many in the AI industry, Colorado's law represents either a responsible model for other states to follow or a regulatory overreach that could stifle innovation. Either way, it's the law that every AI company's legal team is reading right now.

What the Law Actually Requires

At its core, SB24-205 is designed to prevent algorithmic discrimination — the use of AI systems that produce outcomes that unfairly disadvantage individuals based on protected characteristics like race, gender, age, disability, or religion. The law creates obligations for two categories of entities:

  • Developers — companies that design, code, and substantially modify AI systems
  • Deployers — businesses and organizations that use AI systems to make consequential decisions about consumers

For developers, the law requires:

  • Making available a general statement describing the types of high-risk AI systems the developer has developed or intentionally modified, along with documentation about known limitations, intended benefits, and the types of data used in training
  • Providing deployers with information necessary to complete an impact assessment, including known risks of algorithmic discrimination
  • Publishing on their website a statement summarizing the types of high-risk AI systems developed and how they manage risks of algorithmic discrimination

For deployers, the requirements are even more extensive:

  • Implementing a risk management policy and program governing the use of high-risk AI systems
  • Completing an annual impact assessment for each high-risk AI system, documenting the purpose, intended use cases, data inputs, known limitations, and any measures taken to mitigate discrimination risks
  • Providing consumers with notice that an AI system is being used to make a consequential decision about them
  • Providing consumers with a statement explaining the purpose and factors considered by the AI system
  • Offering consumers the opportunity to correct inaccurate data and appeal adverse decisions
  • Notifying the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination
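Taken together, the deployer duties above amount to a documentation schema. The sketch below models one annual impact-assessment record in Python; the field names are illustrative assumptions on my part, since the statute prescribes the content of an assessment, not its format:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class ImpactAssessment:
    """One annual impact-assessment record for a high-risk AI system,
    mirroring the deployer duties listed above (field names illustrative)."""
    system_name: str
    purpose: str
    intended_use_cases: List[str]
    data_inputs: List[str]
    known_limitations: List[str]
    discrimination_risks: List[str]
    mitigation_measures: List[str]
    assessment_date: date
    consumer_notice_in_place: bool   # notice + explanation to consumers
    appeal_process_in_place: bool    # data correction and appeal rights

    def is_complete(self) -> bool:
        """Rough internal completeness check before sign-off."""
        return all([
            self.purpose,
            self.intended_use_cases,
            self.data_inputs,
            self.mitigation_measures,
            self.consumer_notice_in_place,
            self.appeal_process_in_place,
        ])
```

A structure like this also makes the annual refresh straightforward: copy last year's record, update the date and any changed fields, and re-run the completeness check.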

What Counts as "High-Risk"?

The law defines a high-risk AI system as any artificial intelligence system that, when deployed, makes or is a substantial factor in making a consequential decision about a consumer. Consequential decisions include those related to:

  • Education enrollment or opportunities
  • Employment or employment-related decisions (hiring, promotion, termination, compensation)
  • Financial or lending services
  • Essential government services
  • Healthcare services or coverage
  • Housing
  • Insurance
  • Legal services

This is a deliberately broad definition that captures the AI systems most likely to have material impacts on people's lives. A chatbot that recommends restaurants wouldn't qualify, but a hiring algorithm, insurance underwriting model, or tenant screening tool almost certainly would.
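That screening logic can be made concrete. Below is a minimal Python sketch with the category names taken from the list above; note that the statute's "substantial factor" test is ultimately a legal judgment, not a boolean, so this is at most a first-pass inventory filter:

```python
from enum import Enum
from typing import Optional

class DecisionArea(Enum):
    """Consequential-decision categories enumerated in SB24-205."""
    EDUCATION = "education enrollment or opportunity"
    EMPLOYMENT = "employment or employment-related decision"
    FINANCIAL = "financial or lending service"
    GOVERNMENT = "essential government service"
    HEALTHCARE = "healthcare service or coverage"
    HOUSING = "housing"
    INSURANCE = "insurance"
    LEGAL = "legal service"

def is_high_risk(substantial_factor_in_decision: bool,
                 area: Optional[DecisionArea]) -> bool:
    """First-pass screen: a system is high-risk if it makes, or is a
    substantial factor in making, a consequential decision in a listed area."""
    return substantial_factor_in_decision and area is not None

# A restaurant-recommendation chatbot: no consequential decision.
print(is_high_risk(False, None))                    # False
# A resume-screening model used in hiring decisions.
print(is_high_risk(True, DecisionArea.EMPLOYMENT))  # True
```

Anything this filter flags still needs a lawyer's review of whether the system's output is truly a "substantial factor" in the decision.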

How Colorado Compares to the EU AI Act

Colorado's approach draws heavily from the EU AI Act, which was finalized in March 2024. Both frameworks take a risk-based approach, focusing the heaviest requirements on AI systems that make high-stakes decisions about people. However, there are important differences:

  • The EU AI Act creates four risk tiers (unacceptable, high, limited, minimal), while Colorado's law focuses primarily on a single "high-risk" category
  • The EU Act bans certain AI uses entirely (social scoring, real-time biometric surveillance in public spaces), while Colorado's law does not prohibit any specific use cases
  • Colorado's law is more focused on discrimination specifically, while the EU Act addresses a broader range of safety and rights concerns
  • The EU Act creates a new regulatory body (the European AI Office), while Colorado relies on its existing Attorney General's office for enforcement
  • Colorado's affirmative defense provision — allowing companies that demonstrate reasonable compliance efforts to avoid liability — has no direct EU equivalent

Industry observers have noted that a company compliant with the EU AI Act would likely meet most of Colorado's requirements, but the reverse is not necessarily true. Colorado's narrower focus on discrimination means it doesn't address the full range of AI risks covered by the EU framework.

Who Supported and Opposed It

The path to SB24-205's passage was contentious. The bill received strong support from civil rights organizations, consumer advocacy groups, and some academic researchers who argued that AI systems were already making discriminatory decisions at scale without any accountability mechanism.

Opposition came primarily from the tech industry, which mounted an aggressive lobbying campaign against the bill. The Colorado Technology Association, the Software & Information Industry Association (SIIA), and several major tech companies argued that the bill's requirements were vague, overly burdensome, and would discourage AI companies from doing business in Colorado.

Key industry objections included:

  • The definition of "high-risk" was too broad and could capture routine software applications
  • The impact assessment requirements were modeled on the EU AI Act but lacked the EU's institutional support and guidance
  • Small businesses would face disproportionate compliance costs
  • The bill could create a chilling effect on AI development in Colorado

Governor Polis himself expressed reservations even as he signed the bill, noting in his signing statement that he was "concerned about the impact this law may have on an AI industry that is still in its nascent stages" and calling on the legislature to revisit the law before its effective date. Despite these concerns, he signed it, citing the importance of protecting consumers from algorithmic discrimination.

What Companies Need to Do Now

With the February 2026 effective date rapidly approaching, companies that develop or deploy AI systems making consequential decisions about Colorado consumers need to take several steps:

  • Inventory AI systems — Identify which systems qualify as "high-risk" under the law's definitions
  • Conduct impact assessments — Begin the annual assessment process for each high-risk system, documenting purpose, data inputs, discrimination risks, and mitigation measures
  • Build consumer notice mechanisms — Create processes for notifying consumers about AI-driven decisions and offering them appeal rights
  • Establish data correction procedures — Allow consumers to identify and correct inaccurate personal data used by AI systems
  • Implement risk management programs — Develop comprehensive policies governing high-risk AI use
  • Prepare AG notification procedures — Establish internal processes for reporting discrimination findings to the Colorado Attorney General within the 90-day window
  • Document compliance efforts — Take advantage of the affirmative defense by maintaining thorough records of reasonable compliance measures
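The 90-day attorney-general notification window in the steps above is simple to track programmatically. A small sketch of the deadline arithmetic (the helper names are hypothetical, and this is a calendar calculation, not legal advice):

```python
from datetime import date, timedelta

# SB24-205 deployer duty: notify the Colorado AG within 90 days of
# discovering that a high-risk system caused algorithmic discrimination.
AG_NOTIFICATION_WINDOW = timedelta(days=90)

def ag_notification_deadline(discovery_date: date) -> date:
    """Latest date to notify the Colorado Attorney General."""
    return discovery_date + AG_NOTIFICATION_WINDOW

def days_remaining(discovery_date: date, today: date) -> int:
    """Days left in the notification window (negative if overdue)."""
    return (ag_notification_deadline(discovery_date) - today).days

# Discovery on March 1, 2026 yields a deadline of May 30, 2026.
print(ag_notification_deadline(date(2026, 3, 1)))  # 2026-05-30
```

Wiring a check like this into an incident-response workflow is one way to make the reporting step above auditable rather than ad hoc.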

Legal experts estimate that compliance costs for mid-sized companies could range from $50,000 to $500,000 depending on the number and complexity of AI systems deployed.

Other States Watching Colorado

Colorado's law has become a reference point for legislators in at least a dozen other states considering comprehensive AI regulation. Connecticut, Illinois, Texas, and Virginia have all introduced bills that borrow language or concepts from SB24-205.

The key question is whether Colorado's approach — focused on algorithmic discrimination with an affirmative defense — becomes the template for state AI regulation, or whether states chart different courses. Some advocates argue that Colorado didn't go far enough, while industry groups contend that even Colorado's "moderate" approach is too burdensome.

What's clear is that in the absence of federal action, states will continue legislating, and Colorado's law — for better or worse — is the most developed model they have to work from. Whether SB24-205 proves to be a wise regulatory blueprint or an innovation-dampening overreach will likely be determined by the enforcement decisions made by the Colorado Attorney General's office in the law's critical first year.

The Road Ahead

As the effective date approaches, the Colorado Attorney General's office has begun issuing guidance documents to help companies understand their obligations. Industry groups have pushed for amendments that would narrow the law's scope, and several bills introduced in the 2025 Colorado legislative session sought to modify SB24-205's requirements before they take effect.

The law also faces potential preemption challenges. The Trump administration's December 2025 executive order on AI explicitly called for federal standards that could override state-level AI regulation. If federal preemption legislation passes Congress, Colorado's pioneering law could be superseded before its requirements are fully implemented.

Regardless of what happens at the federal level, Colorado's SB24-205 has already shaped the national conversation about AI governance. It proved that comprehensive state-level AI regulation is politically achievable, established a framework that other states are adapting, and forced the AI industry to engage seriously with questions about algorithmic discrimination. The coming months will determine whether it also proves to be workable in practice.