
State vs Federal: The Great AI Regulation Tug-of-War

With 700+ state bills and zero comprehensive federal laws, the battle over who gets to regulate AI is becoming a constitutional showdown

By The AI Lobby · 2026-04-21 · 13 min read
โœจ
AI Overview

Federal preemption would override 200+ state AI bills overnight. Industry groups have spent $18M lobbying specifically for preemption language.

A December 2025 executive order asserts federal authority over state AI regulation, but states aren't backing down. With 700+ state AI bills and no federal law, the fight over who regulates AI is heading toward a constitutional confrontation.

In December 2025, the Trump administration dropped a regulatory bombshell: an executive order asserting broad federal authority over artificial intelligence and explicitly seeking to preempt state-level AI regulations. The order, titled "Maintaining American Leadership in Artificial Intelligence," declared that a fragmented state regulatory landscape poses an unacceptable burden on innovation and directed federal agencies to identify and challenge state AI laws that conflict with federal priorities. It was the most aggressive assertion of federal AI authority to date โ€” and states immediately pushed back.

The executive order didn't emerge in a vacuum. By December 2025, state legislatures had introduced over 700 AI-related bills across all 50 states, with at least 40 signed into law. Meanwhile, Congress had failed to pass a single comprehensive federal AI statute. The contrast was stark: states were acting while Washington debated. The executive order was the federal government's attempt to reassert control over a policy space that states had filled by default.

But the legal and political dynamics of AI preemption are far more complex than a single executive order can resolve. The fight over who regulates AI touches on foundational constitutional questions, deep partisan divides, and competing visions of how to govern a technology that doesn't respect state borders.

The Constitutional Framework

The preemption debate implicates two of the most contested areas of constitutional law: the Commerce Clause and the Tenth Amendment.

The Commerce Clause gives Congress the power to regulate interstate commerce, and courts have interpreted this broadly to allow federal regulation of activities that substantially affect commerce across state lines. AI clearly qualifies: models are trained on data drawn from across the country and around the world, deployed through cloud infrastructure that crosses state borders, and used by businesses operating nationally. The federal government argues that state AI regulations burden interstate commerce by forcing companies to comply with conflicting requirements across jurisdictions.

The Tenth Amendment, however, reserves powers not delegated to the federal government to the states. States have historically regulated consumer protection, employment discrimination, healthcare, insurance, and education โ€” all areas where AI is now being deployed. State attorneys general argue that regulating AI's impact on their residents is a core state function, not a federal prerogative.

The legal question is whether an executive order alone can preempt state law. The answer, most constitutional scholars agree, is no โ€” at least not directly. Federal preemption of state law generally requires an act of Congress, either through explicit preemption language in a statute or through a regulatory framework so comprehensive that it "occupies the field." An executive order can direct federal agencies to challenge state laws in court or to develop regulatory frameworks that could create implied preemption, but it cannot override state legislatures by fiat.

This means the real preemption battle will play out in Congress and the courts โ€” and it's already beginning.

Colorado SB24-205: The Test Case

Colorado's SB24-205, signed into law in May 2024 and set to take effect in June 2026, has become the primary test case for state AI regulation. The law is the most comprehensive state-level AI statute in the country, requiring developers and deployers of "high-risk AI systems" to conduct algorithmic impact assessments, implement risk management practices, provide consumer disclosures, and allow individuals to appeal AI-driven decisions that affect them.

The law defines high-risk AI broadly: any system that makes or substantially contributes to consequential decisions about employment, education, financial services, healthcare, housing, insurance, or legal services. This covers a vast swath of commercial AI applications โ€” from hiring algorithms to credit scoring to insurance underwriting.

Industry response has been fierce. BSA | The Software Alliance, ITI, the U.S. Chamber of Commerce, and individual companies including Microsoft, Google, and Salesforce have all lobbied against SB24-205, arguing it creates compliance burdens that are unworkable for companies operating nationally. The tech industry is expected to challenge the law on Commerce Clause grounds, arguing that Colorado's requirements effectively regulate AI development that occurs outside the state.

Colorado's Attorney General Phil Weiser has signaled he will defend the law aggressively. In a January 2026 statement, Weiser argued that "Colorado residents deserve protection from algorithmic discrimination regardless of whether the federal government acts," and noted that the state's consumer protection authority is well-established constitutional ground. The case, if it reaches federal court, could set precedent for state AI regulation nationwide.

California SB 1047: The Veto That Changed Everything

California's SB 1047, vetoed by Governor Gavin Newsom in September 2024, remains the most politically significant AI bill never enacted. The bill would have imposed safety requirements on frontier AI models costing over $100 million to train, including pre-deployment safety testing, a "kill switch" capability, and liability provisions for catastrophic harms. It was the most ambitious state attempt to regulate foundation models themselves โ€” not just their applications.

The veto exposed deep fractures in the AI regulatory debate. SB 1047 had support from AI safety researchers, civil society organizations, and some AI companies (notably Anthropic, which cautiously endorsed the bill). But it faced overwhelming opposition from the broader tech industry, including Meta, Google, and a coalition of venture capitalists and startup founders who argued the bill would drive AI companies out of California.

Newsom's veto message cited concerns that the bill's focus on model size (training cost) was a poor proxy for actual risk, and that California should not set standards for a technology with global implications without federal coordination. But the subtext was unmistakable: the AI industry's lobbying campaign, which included direct appeals from tech CEOs, an open letter signed by hundreds of AI researchers, and an aggressive media push, had succeeded in framing the bill as anti-innovation.

The SB 1047 veto galvanized both sides. AI safety advocates pointed to it as evidence that industry lobbying can block even popular regulation โ€” polls showed strong public support for the bill's provisions. Industry groups cited it as proof that states recognize the need for a federal approach. In 2026, California legislators introduced multiple successor bills addressing specific AI harms rather than attempting comprehensive regulation โ€” a strategic shift born from the SB 1047 experience.

The State Pushback

Despite the federal preemption order, states have not slowed down. If anything, the executive order has accelerated state action, as legislatures rush to pass AI bills before any federal preemption can nullify them.

California has introduced over 50 AI-related bills in 2026 alone, targeting specific applications including deepfakes in elections (AB 2655), chatbot safety for minors (AB 1008), and AI in healthcare decisions. The state's piecemeal approach reflects lessons from the SB 1047 veto: narrower bills are harder to oppose on "innovation" grounds.

Illinois has been particularly aggressive. Building on its existing Biometric Information Privacy Act (BIPA), which has generated billions in settlements against tech companies, Illinois has introduced bills requiring AI impact assessments for employment decisions (HB 3773), transparency in AI-generated content (SB 2890), and restrictions on AI in insurance underwriting. The state's plaintiff-friendly legal environment makes it a particularly challenging jurisdiction for AI companies.

New York has taken a different approach, focusing on algorithmic accountability in specific sectors. New York City's Local Law 144, which requires bias audits for automated employment decision tools, went into effect in 2023 and has been expanded by subsequent legislation. The City Council and the state legislature have introduced further bills targeting AI in housing (algorithmic tenant screening), financial services, and law enforcement (facial recognition restrictions).

Colorado, in addition to SB24-205, continues to expand its AI regulatory framework. The state's approach has become a model for other legislatures, with its emphasis on impact assessments, transparency, and individual rights providing a template that a dozen other states have adapted.

Other states are finding their own niches. Texas has focused on AI in energy and grid management. Virginia is targeting data center regulation. Connecticut passed an AI transparency bill modeled on the EU AI Act. Utah created an AI regulatory sandbox. The diversity of state approaches is exactly what industry fears โ€” and exactly what federalism advocates celebrate.

Industry's Preference: One Standard to Rule Them All

The tech industry's position is straightforward: a single federal standard is better than 50 different state standards. This argument has genuine merit. An AI model deployed nationally shouldn't have to comply with contradictory requirements across states. Compliance costs for navigating a patchwork of state laws could be enormous, particularly for smaller companies without large legal teams.

But the industry's preferred federal standard is invariably weaker than the strongest state proposals. When tech companies lobby for a "national framework," they typically mean one built on voluntary standards (like NIST guidelines), broad liability protections, limited enforcement mechanisms, and explicit preemption of state laws. The resulting federal law would effectively create a ceiling rather than a floor โ€” preventing states from imposing stronger protections, not ensuring a minimum baseline.

This dynamic is not new. The tech industry used the same playbook with data privacy, arguing for decades that states should defer to a federal standard that never materialized. It was only when states, led by California's CCPA and CPRA, began passing their own privacy laws that Congress felt pressure to act. Even now, the federal privacy debate is stalled partly because the industry insists on preemption provisions that would override California's stronger protections.

The AI debate may follow a similar trajectory. State action creates pressure for federal legislation, but the federal bill that eventually passes may weaken protections rather than strengthen them. The SAFE Innovation Framework Act (S. 2714) includes preemption provisions that would override state AI laws in areas covered by the federal framework โ€” and the industry is lobbying to make those coverage areas as broad as possible.

The Innovation Labs Argument

Federalism advocates make a compelling counter-argument: states serve as "laboratories of democracy," experimenting with different regulatory approaches to see what works. In the AI context, this means Colorado's comprehensive approach, Illinois' enforcement-heavy model, Utah's regulatory sandbox, and California's sector-specific bills all generate real-world data about what effective AI regulation looks like.

Without state experimentation, federal lawmakers would be writing AI legislation in the dark โ€” relying on industry talking points and theoretical frameworks rather than evidence from actual regulatory experience. Colorado's SB24-205 implementation, which begins in June 2026, will provide the first large-scale test of comprehensive AI governance in the United States. That data is invaluable for federal policymakers considering their own legislation.

The innovation labs argument also highlights a timing problem. Congress moves slowly โ€” a comprehensive federal AI law is likely years away. In the meantime, AI systems are making consequential decisions about people's lives every day. States argue that their residents can't wait for Congress to act, and the Tenth Amendment gives them the authority to protect their citizens now.

Where This Goes

The state-federal AI regulation fight is heading toward multiple collision points in 2026 and 2027:

  • Colorado SB24-205 takes effect in June 2026, creating the first real compliance test for the industry and likely triggering legal challenges.
  • Federal court challenges to state AI laws on Commerce Clause grounds are expected by late 2026, with potential implications for all state AI regulation.
  • Congressional action on the SAFE Innovation Framework Act or similar legislation could advance in 2027, with preemption provisions as the central battleground.
  • The 2026 midterm elections will shape the political landscape for AI regulation, with both parties incorporating AI policy into their platforms.

The fundamental tension is unlikely to be resolved cleanly. The AI industry wants regulatory certainty and a single compliance target. States want to protect their residents and preserve their traditional regulatory authority. The federal government wants to assert leadership without actually passing legislation. And the technology itself keeps evolving faster than any government โ€” state or federal โ€” can regulate.

What's clear is that the status quo โ€” 700+ state bills, no federal law, and an executive order of uncertain legal authority โ€” is unsustainable. Something has to give. Track the state-by-state developments on our state tracker and follow the federal debate on our federal policy page.