A Legislative Tsunami at the State Level
In 2025, state legislatures across the country introduced more than 700 AI-related bills across 45 states, an unprecedented volume of legislative activity that reflects both the urgency policymakers feel about AI and the continued absence of comprehensive federal regulation. From algorithmic discrimination protections in Colorado to deepfake criminalization in Texas to transparency mandates in California, the states are building a regulatory patchwork that is becoming increasingly complex and, according to industry groups, increasingly unworkable.
The surge in state AI legislation isn't random. It's the direct result of a federal vacuum. Despite dozens of AI bills introduced in Congress and multiple high-profile hearings, the 118th Congress failed to pass a single comprehensive AI regulation bill. The 119th Congress, which convened in January 2025, faces the same dysfunction. In the absence of federal action, states have stepped in, and they're not waiting for Washington.
Who's Introducing These Bills (and Who Isn't)
A Brookings Institution analysis of state AI legislation revealed a striking partisan pattern: approximately two-thirds of all state AI bills have been introduced by Democratic legislators. Republican lawmakers, while not absent from the conversation, have been significantly less active in proposing AI regulation.
Even more telling is the near-total absence of bipartisanship. Of the hundreds of AI bills introduced across the states, only three had genuinely bipartisan co-sponsorship at the time of the Brookings study. This partisan divide mirrors the broader political divide on technology regulation: Democrats generally favor more regulation to protect consumers and address discrimination, while Republicans tend to prioritize innovation and resist what they see as government overreach.
The partisan gap has practical implications. In Republican-controlled legislatures, few AI regulatory bills advance. In Democratic-controlled legislatures, bills pass but may face gubernatorial vetoes or legal challenges. And in divided governments, AI regulation often stalls entirely.
Which Bills Actually Pass?
Not all AI bills are created equal, and the Brookings data reveals significant variation in passage rates based on the type of regulation proposed:
- "Responsible governance" bills, those focused on transparency, impact assessments, and establishing AI advisory bodies, pass at a rate of 38.6%, the highest of any category
- Deepfake and synthetic media bills pass at roughly 25-30%, often with bipartisan support
- Algorithmic discrimination bills pass at approximately 15%, reflecting industry resistance
- Comprehensive AI regulation bills pass at less than 10%, with Colorado's SB24-205 being the notable exception
- AI moratorium or ban bills virtually never pass, with a sub-5% success rate
The pattern is clear: the less restrictive and more procedural a bill is, the more likely it is to pass. Bills that create study committees or advisory boards sail through. Bills that impose substantive obligations on AI companies face stiff opposition and usually fail.
The Wide Spectrum of State Approaches
The diversity of state approaches to AI regulation is remarkable. At one end of the spectrum, Colorado has enacted the nation's most comprehensive AI law, requiring algorithmic impact assessments, consumer disclosures, and anti-discrimination protections for high-risk AI systems. At the other end, Texas has taken a deliberately hands-off approach, with Governor Greg Abbott emphasizing innovation and economic growth over regulation.
Some notable state approaches include:
- California: A dual-track approach. Governor Newsom signed the AI Transparency Act (SB 942), requiring watermarks and detection tools, but vetoed the more ambitious SB 1047 safety bill. California leads in the number of bills introduced but is selective about what it enacts.
- Illinois: Focused on workplace AI, with the AI Video Interview Act (already in effect) and proposed legislation on algorithmic hiring discrimination. Illinois has been a pioneer in employment-specific AI regulation.
- Connecticut: Modeled its approach closely on the EU AI Act, proposing a risk-tiered framework with a dedicated oversight body. One of the more ambitious state efforts.
- Virginia: Passed the High-Risk AI Developer Duty of Care Act, requiring developers to use reasonable care to prevent foreseeable risks of algorithmic discrimination.
- Utah: Took a business-friendly approach with the AI Policy Act, focusing on disclosure requirements rather than substantive restrictions. Often cited by industry as a model.
- New York: NYC's Local Law 144, requiring bias audits for automated employment decision tools, was one of the first AI regulations in the country and has influenced state-level proposals.
Industry Pushback: The Case for Federal Preemption
The growing patchwork of state AI laws has become the AI industry's primary argument for federal legislation: not because companies want more regulation, but because they want one set of rules instead of fifty. The industry's preferred outcome is a federal framework that is relatively permissive and that explicitly preempts (overrides) state laws.
The lobbying effort for preemption has been massive. According to federal lobbying disclosures, mentions of "preemption" in AI-related lobbying reports increased by over 300% between 2023 and 2025. Companies argue that:
- Compliance with 50 different state AI laws is operationally impossible and prohibitively expensive
- A patchwork of state rules will fragment the AI market and harm American competitiveness
- State legislators often lack the technical expertise to draft effective AI regulation
- Federal standards would provide clarity and consistency that benefits both companies and consumers
Consumer advocates and state attorneys general have pushed back hard, arguing that preemption is a euphemism for deregulation and that states should retain the ability to protect their residents from AI harms.
The Trump Executive Order on Preemption
In December 2025, the Trump administration issued an executive order titled "Maintaining American Leadership in Artificial Intelligence" that explicitly sought to preempt state-level AI regulation. The order directed federal agencies to identify state AI laws that conflict with federal priorities and to take steps to challenge them.
The executive order was widely seen as a victory for the AI industry, which had lobbied aggressively for federal preemption. However, its legal effect is limited: an executive order cannot override state law without Congressional action, and the order's preemption claims are all but certain to face legal challenges from states that have already enacted AI legislation.
The order nonetheless sent a strong signal about the administration's priorities: innovation and competitiveness over regulation. It also gave industry lobbyists a powerful tool, allowing them to cite the executive order when arguing against state AI bills in legislatures across the country.
What This Means for Companies and Consumers
For companies deploying AI systems nationally, the current landscape is genuinely challenging. A hiring algorithm might need to comply with different requirements in Illinois, Colorado, New York City, and Connecticut, each with different definitions, different assessment requirements, and different enforcement mechanisms. Multi-state compliance is complex and expensive.
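To make the compliance-union problem concrete, here is a minimal sketch of how a compliance team might track overlapping per-jurisdiction obligations. The jurisdiction names echo the examples above, but the requirement labels (`impact_assessment`, `annual_bias_audit`, and so on) are illustrative simplifications invented for this sketch, not summaries of the actual statutes.

```python
# Hypothetical sketch: the obligations a single AI hiring tool accumulates
# as it is deployed across jurisdictions. Requirement labels are illustrative
# placeholders, not legal interpretations of any statute.

REQUIREMENTS = {
    "Illinois": {"candidate_notice", "video_interview_consent"},
    "Colorado": {"impact_assessment", "consumer_disclosure", "discrimination_review"},
    "New York City": {"annual_bias_audit", "candidate_notice"},
    "Connecticut": {"risk_classification", "impact_assessment"},
}

def obligations(deploy_in):
    """Union of obligations across every jurisdiction where the tool is used."""
    combined = set()
    for place in deploy_in:
        combined |= REQUIREMENTS.get(place, set())
    return sorted(combined)

# A tool used in Illinois and New York City must satisfy both rule sets at once.
print(obligations(["Illinois", "New York City"]))
```

Even in this toy version, the union grows quickly as jurisdictions are added, and the real problem is harder still: the same label (say, an "impact assessment") can mean different things under different statutes, so obligations cannot simply be deduplicated.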
For consumers, the picture is mixed. Residents of states with strong AI laws benefit from protections that don't exist in states that haven't acted. But the inconsistency means that the same AI system might be subject to oversight in one state and operate without any accountability in the state next door.
The AI Bill Tracker on this site monitors all active state AI legislation, providing real-time updates on bill status, amendments, and committee votes. As the 2026 legislative sessions progress, the patchwork will only grow more complex โ unless Congress finally acts to establish a national framework.