
When Lobbying Spikes, Bills Die: The Data Behind AI Industry Influence

Four case studies show what happens when AI companies spend big to shape—or kill—regulation

By The AI Lobby · 2026-04-22 · 14 min read

Analysis of four landmark AI bills reveals a pattern: when industry lobbying spikes, legislation weakens or dies. From California's vetoed SB 1047 to stalled federal efforts, the data shows bills facing >$10M in lobbying opposition have just a 12% passage rate.

There is a number that should trouble anyone who cares about AI governance: 12 percent. That is the passage rate for AI-related bills that face more than $10 million in organized lobbying opposition, based on our analysis of state and federal legislative data from 2023 through early 2026. Bills that face under $1 million in opposition lobbying pass at roughly 62 percent. The gap is not subtle, and it is not random.
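The tiering behind those passage rates can be sketched in a few lines. This is an illustrative calculation, not our actual pipeline: the bill records, field names, and tier cutoffs below are invented for the example.

```python
# Illustrative only: bucket bills by identified opposition spending and
# compute the share enacted in each bucket. All records here are invented.

def tier(opposition_usd: float) -> str:
    """Assign a bill to an opposition-spending tier."""
    if opposition_usd > 10_000_000:
        return ">$10M"
    if opposition_usd < 1_000_000:
        return "<$1M"
    return "$1M-$10M"

def passage_rates(bills: list[dict]) -> dict[str, float]:
    """Fraction of bills enacted within each spending tier."""
    totals: dict[str, tuple[int, int]] = {}
    for b in bills:
        t = tier(b["opposition_usd"])
        enacted, count = totals.get(t, (0, 0))
        totals[t] = (enacted + b["enacted"], count + 1)
    return {t: enacted / count for t, (enacted, count) in totals.items()}

sample = [
    {"bill": "A", "opposition_usd": 12_000_000, "enacted": 0},
    {"bill": "B", "opposition_usd": 15_000_000, "enacted": 0},
    {"bill": "C", "opposition_usd": 200_000, "enacted": 1},
    {"bill": "D", "opposition_usd": 500_000, "enacted": 1},
]
print(passage_rates(sample))  # {'>$10M': 0.0, '<$1M': 1.0}
```

The real analysis runs the same grouping over every tracked bill, with opposition totals summed from disclosure filings.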

This article examines four real legislative efforts — two that failed, one that passed in weakened form, and one that sailed through with minimal resistance. Together, they reveal a consistent pattern: when AI companies open their wallets to fight a bill, that bill either dies or arrives at the governor's desk so hollowed out that the industry can live with it. When no significant lobbying opposition materializes, bills pass quickly and quietly.

We built the Correlation Tracker specifically to make this pattern visible. It plots lobbying expenditure spikes against legislative outcomes, letting you see — in real time — the relationship between money and policy. The four case studies below are the clearest examples of what that tool reveals.

Methodology

Our analysis draws on four primary data sources:

  • State lobbying disclosures: Filed with the California Secretary of State, Colorado Secretary of State, and Illinois Secretary of State. These filings report quarterly or semi-annual lobbying expenditures and identify the specific bills being targeted.
  • Federal lobbying disclosures: Filed with the Senate Office of Public Records under the Lobbying Disclosure Act (LDA). We used OpenSecrets aggregations supplemented by direct filing review for company-level breakdowns.
  • Legislative records: Bill text, amendment histories, committee votes, floor votes, and veto messages from official state legislative portals and Congress.gov.
  • Public coalition letters and testimony: Filed in legislative records or published by trade associations, these reveal coordinated opposition that does not always show up in dollar figures.

For each case study, we tracked the timeline of lobbying activity against the timeline of legislative action. We identified lobbying spikes — quarters where spending on a specific bill or category of bills increased by more than 50 percent over the prior quarter — and mapped those spikes to subsequent legislative events (amendments, committee votes, floor votes, vetoes).
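The spike rule described above reduces to a simple quarter-over-quarter comparison. The sketch below assumes per-bill quarterly spend has already been extracted from disclosure filings; the series shown is hypothetical.

```python
# A minimal version of the spike test: flag any quarter whose bill-specific
# lobbying spend exceeds the prior quarter's by more than 50 percent.
# The spend series is hypothetical.

def find_spikes(quarterly_spend: list[float], threshold: float = 0.5) -> list[int]:
    """Indices of quarters with a relative increase over the prior quarter above threshold."""
    spikes = []
    for i in range(1, len(quarterly_spend)):
        prev, cur = quarterly_spend[i - 1], quarterly_spend[i]
        if prev > 0 and (cur - prev) / prev > threshold:
            spikes.append(i)
    return spikes

# Quarterly spend on a hypothetical bill, in dollars, first quarter onward.
spend = [200_000, 1_400_000, 2_200_000, 2_300_000]
print(find_spikes(spend))  # [1, 2]
```

Each flagged quarter is then paired with the legislative events (amendments, votes, vetoes) recorded in the weeks and months that follow it.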

A critical caveat: correlation is not causation. Lobbying spikes could be a response to bills gaining momentum rather than the cause of bills dying. We address this in each case study by examining the sequence of events and the specific arguments that appeared in both lobbying disclosures and legislative deliberations. In all four cases, the evidence points to lobbying as a significant causal factor, not merely a coincident one.

We also tracked campaign contributions from AI-related political action committees (PACs) and individual executives to legislators who served on committees with jurisdiction over each bill. While campaign contributions are not lobbying in the legal sense, they are part of the same influence ecosystem, and in several cases we found that legislators who introduced weakening amendments had received significant contributions from the very companies opposing the bill.
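Mechanically, that cross-reference is an intersection of three public datasets: committee rosters, contribution records, and amendment sponsorships. A minimal sketch, with all legislator names and figures invented for illustration:

```python
# Hypothetical example: find committee members who both received contributions
# from companies opposing a bill and sponsored weakening amendments.

def flag_overlaps(contributions: dict[str, int],
                  committee: set[str],
                  amendment_sponsors: set[str]) -> list[str]:
    """contributions maps legislator -> total dollars from opposing companies."""
    return sorted(
        leg for leg in committee
        if contributions.get(leg, 0) > 0 and leg in amendment_sponsors
    )

contributions = {"Rep. A": 25_000, "Rep. B": 0, "Rep. C": 12_000}
committee = {"Rep. A", "Rep. B", "Rep. C", "Rep. D"}
amendment_sponsors = {"Rep. C", "Rep. D"}
print(flag_overlaps(contributions, committee, amendment_sponsors))  # ['Rep. C']
```

An overlap alone proves nothing about intent, which is why we treat it as context for the timeline analysis rather than evidence in itself.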

Finally, we reviewed internal industry communications that became public through legislative testimony, leaked documents, and media reporting. These materials provide context for the lobbying strategies that are not visible in disclosure filings alone — including coordinated messaging campaigns, the strategic deployment of "grassroots" opposition from company employees, and the use of academic proxies to generate research supportive of industry positions.

Case Study 1: California SB 1047 — The Bill Big Tech Killed

The Bill

California Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was introduced in February 2024 by State Senator Scott Wiener. At its core, the bill required developers of the most powerful AI models — those costing more than $100 million to train or using more than 10^26 floating-point operations — to conduct safety testing before deployment. Specifically, it mandated:

  • Pre-deployment safety evaluations for "frontier models" above the compute threshold
  • A "kill switch" capability allowing developers to shut down a deployed model if it exhibited dangerous behavior
  • Whistleblower protections for employees who reported safety concerns
  • Establishment of a state regulatory body (the Frontier Model Division within the Government Operations Agency) to oversee compliance
  • Civil liability for developers whose models caused "critical harms" — defined as events causing mass casualties or more than $500 million in damage

The bill was, by design, narrow. It targeted only the largest frontier models and exempted open-source fine-tuning, small developers, and applications below the compute threshold. Senator Wiener explicitly modeled it on existing safety testing regimes in pharmaceuticals and aviation.

At the time of introduction, only a handful of models worldwide would have met the bill's threshold — primarily those built by OpenAI, Google DeepMind, Meta, and Anthropic. This was not accidental. Wiener designed the bill to regulate the most powerful systems without burdening the broader AI ecosystem, an approach that AI safety researchers had been advocating for years.

The bill drew immediate support from a coalition of AI safety organizations, including the Center for AI Safety, the Future of Life Institute, and a group of over 100 AI researchers who signed an open letter endorsing mandatory safety testing. Labor unions, including the California Federation of Teachers and the Screen Actors Guild, also backed the bill, citing concerns about AI's impact on employment.

The Lobbying Response

The industry response was immediate and overwhelming. Within weeks of the bill's introduction, lobbying disclosures filed with the California Secretary of State began showing sharp increases in AI-related expenditures:

  • Meta: Reported $4.8 million in California lobbying expenditures for 2024, with SB 1047 listed as a target on multiple quarterly filings. Meta deployed at least 12 contract lobbyists in Sacramento specifically to work on this bill, in addition to its in-house government affairs team. Meta's position was clear: the bill would disproportionately burden open-source AI development, which Meta had staked its strategy on with the LLaMA model family.
  • Google: Spent $3.2 million on California lobbying in 2024, a 40 percent increase over 2023. Google's lobbying on SB 1047 focused on the argument that California should not create a regulatory patchwork and that federal legislation was the appropriate venue for AI safety standards.
  • OpenAI: Made the most dramatic shift of all. In 2022 and early 2023, OpenAI's state-level lobbying presence was near zero. By mid-2024, the company was spending $1.5 million on California lobbying, with SB 1047 as its primary target. OpenAI CEO Sam Altman publicly opposed the bill despite having called for AI regulation before Congress just a year earlier.

Beyond direct lobbying expenditures, the industry organized a coordinated coalition campaign. In August 2024, more than 30 companies and industry groups signed a joint letter to Governor Gavin Newsom urging a veto. Signatories included Meta, Google, OpenAI, Andreessen Horowitz, the Chamber of Progress, and Y Combinator. The letter argued that SB 1047 would "chill innovation," drive AI companies out of California, and create legal uncertainty that would benefit only lawyers.

Several prominent AI researchers also opposed the bill, including Yann LeCun (Meta's chief AI scientist) and Andrew Ng, who called it "well-intentioned but poorly designed." Notably, Anthropic initially opposed the bill but shifted to qualified support after amendments addressed some of its concerns — the only major AI company to break from the industry coalition.

The Timeline

The sequence of events is instructive:

  • February 2024: SB 1047 introduced. Bill text targets frontier models with mandatory safety testing and a state oversight body.
  • March–April 2024: Industry lobbying filings begin spiking. Meta, Google, and OpenAI each expand their Sacramento lobbying teams.
  • May 2024: Bill passes the State Senate on a 32-1 vote. Industry opposition intensifies — coalition letters begin circulating.
  • June–July 2024: Amendments weaken the bill. The state oversight body (Frontier Model Division) loses enforcement power and becomes advisory. The compute threshold is narrowed. Some liability provisions are softened.
  • August 2024: Bill passes the Assembly. More than 30 companies sign the joint veto request letter to Governor Newsom. Meta, Google, and OpenAI executives conduct private meetings with the governor's office.
  • September 29, 2024: Governor Newsom vetoes SB 1047. In his veto message, Newsom writes that the bill "does not take into account whether an AI system is deployed in high-risk environments" and that it could apply too broadly to basic functions that carry a "negligible risk of causing or enabling critical harm." He cites concerns about "innovation" and California's competitive position.

The language in Newsom's veto message closely tracks the talking points in industry lobbying materials and the coalition letter — particularly the emphasis on innovation, overbreadth, and the distinction between high-risk and low-risk deployments. The governor's office disclosed that it had held multiple meetings with industry representatives in the weeks before the veto, though it released no visitor logs.

The veto was particularly notable because SB 1047 had passed both chambers of the legislature with strong margins. The Senate vote was 32-1; the Assembly vote was similarly lopsided. The bill had clear democratic support from elected legislators. It was killed by executive action after an intense lobbying campaign focused almost entirely on the governor's office — a pattern that underscores how lobbying can circumvent the legislative process by targeting a single decision-maker.

Newsom simultaneously announced a package of voluntary AI safety initiatives, including an executive order directing state agencies to develop AI guidelines. This mirrored the industry's preferred approach at the federal level: replace binding legislation with non-binding executive action. Senator Wiener publicly criticized the veto, noting that the governor's proposed alternative lacked enforcement mechanisms and could be reversed by a future governor.

The SB 1047 veto had ripple effects beyond California. Legislators in other states considering AI safety bills reported that industry lobbyists cited the Newsom veto as precedent — arguing that even California's governor recognized the bill was too aggressive. The veto became a lobbying tool in its own right, used to discourage other states from attempting similar legislation. In this sense, the $9.5 million spent to kill SB 1047 produced returns far beyond California's borders.

The Numbers

Total identified industry lobbying expenditures targeting SB 1047: approximately $9.5 million in direct spending, plus an estimated $2–4 million in coalition organizing, PR campaigns, and indirect lobbying. Against this, supporters of the bill — primarily AI safety organizations, the Electronic Frontier Foundation, and labor unions — spent under $500,000 in total lobbying.

"SB 1047 didn't fail on its merits. It failed because the companies it regulated spent twenty times more fighting it than its supporters spent defending it. That's not democracy — that's a market."

β€” A former California Legislative Analyst's Office staffer, speaking on condition of anonymity

The Revolving Door

The SB 1047 fight also illuminated the revolving door between government and the AI industry. Several of the lobbyists retained by Meta, Google, and OpenAI to work on the bill were former members of the California legislature or former legislative staff. According to lobbying disclosures, at least five registered lobbyists working against SB 1047 had previously served in the California Assembly or Senate within the past decade.

These connections matter because they provide access. A former Assembly member who now lobbies for Meta can get a meeting with a sitting legislator in ways that a public interest group cannot. They understand the procedural levers — which amendments to introduce, which committees to target, when to apply pressure — that outsiders do not. The revolving door does not just provide information; it provides influence infrastructure that is unavailable to the bill's supporters.

Campaign contributions further cemented these connections. During the 2023-2024 election cycle, PACs affiliated with Meta, Google, and OpenAI contributed a combined $2.3 million to California legislative campaigns, according to California Fair Political Practices Commission filings. While these contributions were not earmarked for SB 1047 specifically, they created a web of relationships that the industry leveraged when the bill moved through the process.

Explore the full lobbying timeline for SB 1047 on the Correlation Tracker, which plots Meta, Google, and OpenAI's quarterly California spending against each legislative milestone.

Case Study 2: Colorado SB24-205 — Passed, But Hollowed Out

The Bill

Colorado Senate Bill 24-205, the Consumer Protections for Artificial Intelligence Act, was introduced in the 2024 legislative session as the most ambitious state-level AI regulation in the country. As originally drafted, the bill required:

  • Mandatory algorithmic impact assessments before deploying any "high-risk" AI system that made "consequential decisions" affecting consumers in employment, housing, credit, insurance, education, or healthcare
  • Annual independent audits of high-risk AI systems by accredited third-party auditors
  • A private right of action allowing consumers harmed by biased AI decisions to sue deployers directly
  • Strict compliance timelines — covered entities would have 90 days to bring existing systems into compliance and would need pre-deployment certification for new systems
  • Transparency requirements including public disclosure of AI system capabilities, training data characteristics, and known bias risks

The bill's sponsor, Senator Robert Rodriguez, described it as a "civil rights bill for the algorithmic age." Consumer advocates, including the Colorado Consumer Health Initiative and the ACLU of Colorado, strongly supported the original text.

The Lobbying Response

Industry opposition to SB24-205 was significant but more targeted than the all-out war waged against California's SB 1047. Colorado is a smaller lobbying market, and the tech industry's Sacramento playbook had to be adapted:

  • The Colorado Technology Association led industry opposition, coordinating testimony and lobbying on behalf of member companies.
  • Microsoft, Google, and Salesforce each filed lobbying disclosures listing SB24-205 as a target. Combined direct lobbying spend on the bill was estimated at $1.5–2.5 million — smaller than the California numbers but substantial for a state where the average bill attracts under $100K in organized lobbying.
  • The Chamber of Commerce and Colorado Business Roundtable testified against the bill's compliance timelines and private right of action, arguing they would expose small businesses to frivolous litigation.

Notably, the AI-native companies — OpenAI, Anthropic, and Meta — did not lobby as aggressively on SB24-205 as they did on SB 1047. The bill's focus on deployers (companies using AI systems) rather than developers (companies building them) meant it was less of a direct threat to the frontier labs. Instead, opposition came primarily from the broader tech ecosystem — enterprise software companies, insurers using AI for underwriting, and HR tech firms using AI in hiring.

What Changed

Between introduction and final passage, SB24-205 underwent significant amendments, each of which tracked closely to industry lobbying demands:

  • Compliance timeline extended: The original 90-day compliance window was stretched to 18 months for existing systems and 12 months for new deployments. Industry had asked for 24 months; this was a negotiated middle ground.
  • Safe harbor provision added: Companies that voluntarily adopted the NIST AI Risk Management Framework or ISO 42001 standards were granted a rebuttable presumption of compliance — meaning they could avoid liability by showing good-faith adherence to industry-developed standards. Consumer advocates argued this was a fox-guarding-the-henhouse provision, since industry representatives sat on the NIST and ISO committees that wrote those standards.
  • Small business exemptions: Companies with fewer than 50 employees and under $10 million in annual revenue were fully exempted from the bill's requirements. The original bill had no size exemption.
  • Private right of action weakened: The original bill allowed consumers to sue directly. The final version required consumers to first file a complaint with the Colorado Attorney General, who would have exclusive enforcement authority for the first two years. Direct private action was delayed until February 2027.
  • Audit requirements softened: Annual independent audits were changed to biennial self-assessments with third-party audit required only upon a complaint or AG investigation.

The Outcome

Governor Jared Polis signed SB24-205 into law in May 2024, making Colorado the state with the strongest comprehensive AI regulation on the books. But the signed version was a substantially different bill from the one introduced. The core framework survived — impact assessments, transparency, and some accountability — but the enforcement mechanisms that would have given the law real teeth were systematically weakened.

Governor Polis himself acknowledged the tension, writing in a signing statement that he had "concerns" about the bill's potential impact on innovation and would revisit it if implementation proved burdensome. The statement echoed the same "innovation" language that appeared throughout industry lobbying materials.

The Colorado story is arguably more revealing than the California one. SB 1047 was vetoed outright — a clear loss for regulation. SB24-205 passed, which proponents counted as a win. But a close reading of what was lost in the amendment process raises the question: did the AI industry lose this fight, or did it win by allowing a weakened bill to pass? Several industry lobbyists, speaking on background, described the final version of SB24-205 as "manageable" and "within compliance capacity." One described the safe harbor provision as "basically a free pass for anyone already following NIST guidelines, which is most of our clients."

The lesson from Colorado is that bill passage is not the same as regulatory victory. A bill can pass and still represent a lobbying success for the industry it purports to regulate — if the legislative process has removed the provisions that would have imposed real costs or created real accountability.

"Colorado proved that a determined legislature can pass an AI bill even against industry opposition. It also proved that lobbying can reshape a bill so thoroughly during the legislative process that passage itself becomes acceptable to the industry that opposed it."

The Correlation Tracker data for Colorado shows a clear pattern: lobbying expenditures spiked in the quarter after SB24-205 was introduced, and each subsequent amendment corresponded to a specific demand in lobbying filings or committee testimony.

Case Study 3: Federal AI Legislation — The $50 Million Stall

The Landscape

Since 2023, Congress has been attempting to pass comprehensive federal AI legislation. The efforts have been bipartisan, high-profile, and — by any objective measure — a total failure. As of April 2026, no comprehensive federal AI law has been enacted. The contrast with the EU, which passed the AI Act in March 2024, is stark.

Two legislative efforts deserve particular scrutiny:

  • The Blumenthal-Hawley Framework (2023): In September 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) released a bipartisan framework for AI legislation based on months of hearings and expert testimony. The framework proposed licensing requirements for high-risk AI systems, an independent federal oversight agency, mandatory safety testing, and a private right of action for consumers harmed by AI systems. It was the most ambitious congressional AI proposal to date.
  • The Schumer AI Insight Forums (2023–2024): Senate Majority Leader Chuck Schumer organized a series of "AI Insight Forums" — closed-door sessions where tech CEOs, researchers, and civil society representatives briefed senators on AI policy. The forums produced a set of principles and legislative priorities but, crucially, no bill text. Schumer's stated goal was to build consensus; critics argued the forums allowed industry to run out the clock on the legislative calendar.

The Lobbying Numbers

Federal AI lobbying has been staggering in both scale and sophistication. According to OpenSecrets data:

  • 2023: AI-related federal lobbying disclosures exceeded 1,000 filings for the first time. Combined estimated spend: $30 million+.
  • 2024: Filings increased another 40 percent. Spend estimates: $40–45 million.
  • 2025: The top seven AI companies alone — Meta, Amazon, Google, Microsoft, OpenAI, Anthropic, and Apple — spent over $50 million in combined federal lobbying. Meta accounted for more than $36 million of that total in the first half alone.

These numbers dwarf the combined spending of every consumer advocacy group, AI safety organization, and labor union that has lobbied for federal AI regulation. The Center for AI Safety, the Partnership on AI, and the AFL-CIO's technology division together spent under $5 million on federal AI lobbying in 2025.

The lobbying was not limited to direct congressional engagement. AI companies invested heavily in "thought leadership" — funding think tanks, academic research, and policy organizations that produced papers and recommendations aligned with industry positions. The Information Technology and Innovation Foundation, the Center for Data Innovation, and the Computer & Communications Industry Association all received significant funding from AI companies and produced research opposing mandatory AI regulation. While think tank funding is not classified as lobbying under the LDA, it serves a similar influence function by shaping the intellectual environment in which legislators make decisions.

Companies also deployed their own executives as quasi-lobbyists, testifying before congressional committees and participating in Schumer's AI Insight Forums. Sam Altman's May 2023 Senate testimony — in which he called for AI regulation while simultaneously lobbying against specific regulatory proposals — became a defining moment of the debate. The testimony generated headlines about Altman's "pro-regulation" stance while his lobbying team worked to ensure that any resulting legislation would be voluntary and industry-friendly.

The Industry Strategy

The lobbying did not just oppose specific bills. It advanced an alternative vision of AI governance that was far more favorable to industry:

  • Voluntary commitments over binding law: In July 2023, the White House announced that seven leading AI companies had made voluntary commitments on AI safety, including pre-deployment testing, watermarking of AI-generated content, and sharing safety research. Industry lobbyists cited these commitments as evidence that legislation was unnecessary — or at minimum, premature.
  • Executive orders over statutes: The Biden administration's October 2023 Executive Order on AI Safety (EO 14110) created a regulatory framework through executive action. Industry actually supported parts of this approach — because executive orders can be reversed by the next president (as indeed happened in January 2025, when the Trump administration rescinded EO 14110). Legislation, by contrast, is far harder to undo.
  • Federal preemption: Industry's strongest lobbying push at the federal level was not for or against any specific bill — it was for federal preemption language that would override state AI laws. Companies argued that a patchwork of state regulations was unworkable and that a single federal standard was needed. The catch: they pushed preemption language without accepting any binding federal standard, creating a situation where state laws would be struck down and replaced with... nothing.
  • Running out the clock: Several former congressional staffers and lobbyists, speaking on background, described a deliberate strategy of engagement without resolution. Companies participated enthusiastically in hearings, forums, roundtables, and working groups — all of which consumed legislative bandwidth without producing bill text. "They weren't trying to pass a bill. They were trying to prevent one," as one former Senate Commerce Committee staffer put it.

The Outcome

As of April 2026, Congress has passed zero comprehensive federal AI laws. The Blumenthal-Hawley framework never became a bill. Schumer's AI Insight Forums produced a set of principles that were never codified. The 118th Congress ended without action, and the 119th Congress has shown no appetite for comprehensive AI legislation.

What has passed federally are narrow, industry-friendly measures: the National AI Initiative Act (reauthorization), AI procurement guidelines for federal agencies, and AI-related provisions in defense authorization bills. None of these impose binding safety requirements or create enforcement mechanisms comparable to what state bills like SB 1047 or SB24-205 attempted.

"We spent $50 million to preserve the status quo. And it worked. There is no federal AI law, and there probably won't be one until something goes badly wrong — and by then the industry will be big enough to shape whatever response emerges."

β€” A Washington-based technology lobbyist, speaking at an off-the-record industry event in early 2026

The Lobbying Revolving Door at the Federal Level

The federal AI lobbying operation is staffed in large part by former government officials. OpenAI hired Chris Lehane, a veteran Democratic operative, as its head of global affairs. Meta's lobbying team includes multiple former congressional staffers. Google retains over a dozen lobbying firms staffed by former members of Congress and senior agency officials.

According to a revolving door database maintained by OpenSecrets, more than 60 percent of registered AI lobbyists in Washington have prior government experience. This is actually higher than the overall average for lobbyists across all industries (approximately 55 percent), suggesting that AI companies are particularly aggressive in recruiting government insiders.

The revolving door operates in both directions. Several former tech industry executives now hold senior positions in federal agencies with AI jurisdiction, including the Commerce Department, the Office of Science and Technology Policy, and the National Institute of Standards and Technology. While these individuals may be genuinely committed to the public interest, their prior industry relationships and future career incentives create at minimum an appearance of conflict that undermines public trust in the regulatory process.

Track the full federal lobbying spend by company on the Correlation Tracker, which plots each company's quarterly federal expenditures against the introduction, amendment, and stalling of major AI bills.

Case Study 4: Illinois HB 2557 — What Happens When Lobbying Doesn't Show Up

The Bill

Illinois House Bill 2557, the Artificial Intelligence Video Interview Act, was signed into law in August 2019 (effective January 2020), making Illinois the first state in the nation to regulate a specific AI application. The bill was narrow by design: it required employers using AI to analyze video interviews of job applicants to:

  • Notify applicants that AI would be used to analyze their video interview
  • Explain how the AI system works in general terms
  • Obtain written consent from the applicant before using AI analysis
  • Limit distribution of the video to only those involved in the hiring process
  • Destroy videos within 30 days if requested by the applicant

The bill did not ban AI video analysis, did not require audits of the underlying algorithms, did not create a private right of action, and did not impose penalties beyond what already existed under the Illinois Consumer Fraud Act. It was, in legislative terms, a notice-and-consent bill — the lightest possible regulatory touch.

The Lobbying Response (or Lack Thereof)

Illinois lobbying disclosures for the period around HB 2557's passage reveal a striking absence: no major technology company registered opposition to the bill. Neither Google, Meta (then Facebook), Microsoft, Amazon, nor any AI-specific company filed disclosures targeting it.

The primary companies affected — HireVue, Pymetrics, and other AI-powered hiring platforms — were relatively small and lacked Washington-style lobbying operations. HireVue's total lobbying expenditures in 2019 were under $100,000 nationwide. The Illinois Chamber of Commerce filed general testimony on the bill but did not mount a sustained campaign.

Total identified lobbying expenditures opposing HB 2557: under $200,000.

The Outcome

HB 2557 passed the Illinois House unanimously, passed the Senate with only two dissenting votes, and was signed by Governor J.B. Pritzker without public controversy. The entire process — from introduction to signing — took less than six months. There were no significant amendments, no coalition opposition letters, and no veto threats.

The bill's passage was so smooth that it received minimal national media coverage at the time. It was only later, as the AI governance debate intensified, that scholars and advocates pointed to Illinois as evidence that narrow, well-defined AI bills could pass even in a legislative environment increasingly hostile to tech regulation.

Illinois's success was also enabled by its broader regulatory culture. The state already had the Biometric Information Privacy Act (BIPA), one of the strongest biometric data laws in the country, which had survived industry challenges and generated significant litigation. Illinois legislators were accustomed to passing technology-focused consumer protection bills and had institutional knowledge about how to draft them narrowly enough to withstand industry opposition. HB 2557 was drafted by the same legislative staff who had worked on BIPA, and it benefited from that experience.

The bill also benefited from timing. In 2019, AI regulation was not yet a frontline political issue. The AI lobbying infrastructure that would later kill SB 1047 and stall federal legislation simply did not exist yet. Had HB 3773 been introduced in 2024 instead of 2019, it might have faced a very different lobbying environment β€” even as a narrow bill.

Why It Matters

HB 3773 is the control case in our analysis. It demonstrates what happens when a bill is drafted narrowly enough to avoid triggering massive industry opposition:

  • Narrow scope: The bill targeted one specific application (video interview AI) rather than AI broadly.
  • Low compliance cost: Notice-and-consent requirements are cheap to implement. No audits, no new regulatory bodies, no safety testing.
  • Small affected industry: The companies directly affected were startups, not trillion-dollar tech giants with lobbying armies.
  • No existential threat: The bill did not threaten any company's core business model. It merely required disclosure and consent for one feature of one type of product.

Compare this to SB 1047 (which targeted the largest AI models built by the richest companies on Earth), SB24-205 (which required audits of any high-risk AI system), or federal proposals (which would have created a new regulatory agency). Each of those bills threatened significant revenue, operational flexibility, or strategic positioning for major AI companies. Each triggered massive lobbying responses. HB 3773 threatened none of those things, and passed easily.

There is a deeper lesson here about the economics of lobbying opposition. Companies make rational decisions about when to lobby and when not to. A bill that costs $50,000 to comply with does not justify a $500,000 lobbying campaign to kill it. But a bill that could impose $500 million in compliance costs, liability exposure, or competitive disadvantage easily justifies a $10 million lobbying campaign β€” especially if the expected return on that investment (in terms of regulatory avoidance) is positive. HB 3773 fell below the threshold where the lobbying math made sense. SB 1047 and federal AI legislation were well above it.
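The threshold logic described above can be made concrete with a back-of-envelope expected-value check. This is an illustrative sketch only: the function name and all figures are hypothetical, echoing the numbers used in the paragraph rather than any real filing.

```python
# Back-of-envelope lobbying ROI: lobby when expected regulatory savings
# (compliance cost avoided, weighted by the chance the campaign succeeds)
# exceed the campaign's cost. All figures are hypothetical.
def worth_lobbying(compliance_cost, kill_probability, campaign_cost):
    expected_savings = compliance_cost * kill_probability
    return expected_savings > campaign_cost

# HB 3773-scale bill: ~$50k compliance; even a guaranteed kill
# does not justify a $500k campaign.
print(worth_lobbying(50_000, 1.0, 500_000))        # prints False

# SB 1047-scale bill: $500M exposure; even a 20% chance of killing
# the bill justifies a $10M campaign.
print(worth_lobbying(500_000_000, 0.2, 10_000_000))  # prints True
```

The asymmetry falls directly out of the arithmetic: the stakes, not the lobbying budget, determine whether the campaign happens at all.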

This economic logic means that the bills most in need of protection from lobbying β€” those addressing the most serious risks posed by the most powerful AI systems β€” are precisely the bills most likely to be killed by lobbying. The bills that pass easily are the ones regulating low-stakes applications where the potential harms (and the potential compliance costs) are modest. The governance gap this creates is enormous.

The Illinois Legacy

Despite its modest scope, HB 3773 has had an outsized influence on the national AI governance debate. The bill demonstrated three things that remain relevant:

  • AI can be regulated. Before HB 3773, the tech industry's default argument was that AI was too complex and too fast-moving for legislative regulation. Illinois proved that a state legislature could draft, pass, and implement an AI-specific law without the sky falling.
  • Compliance is manageable. Companies subject to HB 3773 β€” including HireVue, which initially expressed concerns β€” implemented compliance measures without significant difficulty. The predicted chilling effect on innovation did not materialize.
  • Narrow scope is a double-edged sword. The bill passed because it was narrow. But its narrowness means it addresses only one tiny slice of the AI governance challenge. Video interview AI is a $200 million market. The broader AI market is measured in trillions. Illinois's approach cannot scale to the scope of the problem.

Illinois has since passed additional AI-related legislation, including amendments to its existing Human Rights Act to address AI-driven employment discrimination. These subsequent bills have faced somewhat more lobbying opposition β€” suggesting that even in Illinois, the lobbying infrastructure is catching up to the legislative ambition.

The Pattern

Across our four case studies β€” and extending to the broader dataset available on the Correlation Tracker β€” a clear pattern emerges:

1. Lobbying Spending Predicts Outcomes

When AI-related lobbying opposition exceeds $10 million on a single bill or coordinated set of bills, the legislation has a 12 percent passage rate in its original form. Most bills in this category are either vetoed, stalled in committee, or amended so significantly that industry considers the final version acceptable.

When lobbying opposition is under $1 million, passage rates jump to 62 percent β€” roughly in line with the base rate for bills that make it to committee markup. The money does not just tilt the playing field. It reshapes it.

To put this in concrete terms: if we look at the 27 significant AI-related bills introduced across all 50 states and Congress between January 2023 and December 2025, the outcomes break down as follows:

  • Bills facing over $10M in lobbying opposition: 8 introduced, 1 passed in original form (12.5%)
  • Bills facing $1M–$10M in lobbying opposition: 11 introduced, 4 passed β€” but 3 of those were significantly amended before passage (36% passage rate, with most survivors weakened)
  • Bills facing under $1M in lobbying opposition: 8 introduced, 5 passed (62.5% passage rate, with minimal amendments)
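The tier breakdown above reduces to simple arithmetic. The sketch below recomputes the passage rates from the counts cited in this article; the tier labels and data structure are ours, for illustration.

```python
# Recompute passage rates by lobbying-opposition tier, using the
# bill counts cited in this article (27 bills, Jan 2023 - Dec 2025).
tiers = {
    "over $10M": {"introduced": 8,  "passed": 1},
    "$1M-$10M":  {"introduced": 11, "passed": 4},
    "under $1M": {"introduced": 8,  "passed": 5},
}

for label, t in tiers.items():
    rate = t["passed"] / t["introduced"] * 100
    print(f"{label}: {t['passed']}/{t['introduced']} = {rate:.1f}%")
```

Running this reproduces the 12.5%, 36%, and 62.5% figures quoted above.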

The pattern holds across party lines, geographic regions, and legislative chambers. It does not matter whether the bill was introduced by a Democrat or a Republican, in a blue state or a red state, in a senate or a house. What predicts the outcome is not the bill's policy merits, the partisanship of its sponsors, or its public support β€” it is the amount of money spent opposing it.

2. The Amendment Funnel

Bills that face heavy lobbying rarely die outright in committee. Instead, they undergo a systematic weakening through amendments that closely track industry lobbying demands. We identified this pattern in both SB 1047 and SB24-205:

  • Industry lobbyists identify specific provisions they oppose (enforcement mechanisms, private right of action, compliance timelines)
  • Amendments are introduced β€” often by legislators who received campaign contributions from the same companies β€” that weaken those specific provisions
  • The amended bill passes, allowing legislators to claim credit for "regulating AI" while the industry gets a law it can live with

Colorado's SB24-205 is the textbook example. The bill passed, making Colorado the "toughest AI regulation state" in headlines β€” but the enforcement mechanisms that would have given the law its teeth were systematically removed during the amendment process.

3. Breadth Triggers Opposition

The data is unambiguous: the broader a bill's scope, the more lobbying opposition it attracts. Bills targeting all frontier AI models (SB 1047) or all high-risk AI systems (SB24-205) trigger industry-wide coalitions. Bills targeting one narrow application (Illinois HB 3773) fly under the radar.

This creates a structural problem for AI governance. The most important policy questions β€” Who is liable when AI causes harm? What safety testing should be required? Who oversees the companies building the most powerful systems? β€” are inherently broad. They cannot be addressed by narrow, application-specific bills. But broad bills are precisely the ones that trigger the lobbying response that kills them.

4. Voluntary Over Mandatory

In every case where comprehensive legislation stalled, industry offered a voluntary alternative: voluntary commitments, industry standards, self-assessment frameworks. These alternatives share a common feature β€” they are not legally enforceable and can be abandoned at any time. The NIST AI Risk Management Framework, widely cited by industry as a sufficient governance mechanism, is voluntary and has no compliance verification process.

The federal case study is the starkest example. Industry spent over $50 million on federal AI lobbying, and the result was no binding federal law, voluntary commitments that carry no penalties for non-compliance, and an executive order that was partially rescinded by the next administration.

5. The Spending Gap

In every case study, the spending imbalance between industry and pro-regulation advocates favored industry, and in the three cases where legislation was killed or weakened, the gap was at least 10-to-1:

  • SB 1047: ~$9.5M industry vs. ~$500K pro-regulation (19:1)
  • SB24-205: ~$2M industry vs. ~$200K pro-regulation (10:1)
  • Federal: $50M+ industry vs. ~$5M pro-regulation (10:1)
  • HB 3773: ~$200K industry vs. ~$100K pro-regulation (2:1) β€” and the bill passed

The only case where spending was roughly comparable was the one case where the bill passed without significant weakening. This is not a coincidence.
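The ratios in the list above follow directly from the article's spending estimates. A minimal recomputation, using those estimated figures:

```python
# Spending-gap ratios from the article's estimated figures
# (industry spending vs. pro-regulation spending, USD).
cases = {
    "SB 1047":  (9_500_000, 500_000),
    "SB24-205": (2_000_000, 200_000),
    "Federal":  (50_000_000, 5_000_000),
    "HB 3773":  (200_000, 100_000),
}

for name, (industry, pro_reg) in cases.items():
    print(f"{name}: {industry / pro_reg:.0f}:1")
```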

Academic research supports this finding. A 2024 study by researchers at the Brookings Institution found that across all policy domains, not just technology, bills facing organized lobbying opposition from industries with more than $5 million in annual lobbying budgets had passage rates roughly one-third of the baseline. The AI-specific data we present here is consistent with β€” and in some cases more extreme than β€” the cross-industry pattern. AI lobbying is not an outlier; it is a particularly effective example of a well-documented phenomenon.

6. The Coordination Premium

Individual company lobbying is less effective than coordinated industry lobbying. In every case study where legislation was killed or significantly weakened, the industry organized a coalition response β€” a joint letter, a trade association campaign, or coordinated testimony from multiple companies. When companies lobby individually, legislators can play them off against each other, exploiting differences in position and priority. When companies lobby as a unified bloc, legislators face a single, powerful opponent with a consistent message.

The SB 1047 coalition letter β€” signed by more than 30 companies β€” was the clearest example. The letter presented a unified industry position, making it politically difficult for any individual legislator to break with what appeared to be a consensus view from the entire technology sector. The letter also created political cover for Governor Newsom's veto: he could point to broad industry opposition rather than appearing to act on behalf of any single company.

At the federal level, the coordination takes more subtle forms. Trade associations like TechNet, the Information Technology Industry Council (ITI), and the Software Alliance (BSA) serve as coordination mechanisms, aligning lobbying messages across dozens of member companies without requiring the companies to publicly sign a joint letter. These associations spent an additional $15–20 million on federal AI lobbying in 2024-2025, on top of the individual company expenditures tracked above.

7. Timing Is Strategic

The data reveals that lobbying spending is not distributed evenly across the legislative calendar. Instead, it spikes at specific decision points β€” committee markups, floor votes, and (especially) the window between legislative passage and executive action. In the California SB 1047 case, the largest single-quarter lobbying spike occurred in Q3 2024, the quarter when the bill moved from the Assembly to the governor's desk. This timing is consistent with a strategy of concentrating lobbying resources at the point where they are most likely to influence the outcome β€” the veto decision.

Similarly, federal AI lobbying spiked in Q4 2023, immediately after the Biden executive order, and again in Q1 2024, when the Blumenthal-Hawley framework was expected to become formal bill text. The lobbying was not reactive β€” it was anticipatory, deployed in advance of legislative action to shape the parameters of the debate before bill language was even finalized.
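One way to operationalize this kind of spike detection is a trailing-average rule: flag any quarter whose spending exceeds a multiple of the recent baseline. The sketch below is a simplified illustration of the idea, not the Correlation Tracker's actual algorithm, and the quarterly figures are hypothetical placeholders, not real disclosure data.

```python
# Illustrative spike detector: flag quarters where spending exceeds
# a multiple of the trailing-average baseline. All figures below are
# hypothetical placeholders, not real disclosure data.
def find_spikes(quarterly, multiple=2.0, window=3):
    """Return labels of quarters whose spending exceeds `multiple` times
    the mean of the preceding `window` quarters."""
    spikes = []
    for i in range(window, len(quarterly)):
        label, amount = quarterly[i]
        baseline = sum(a for _, a in quarterly[i - window:i]) / window
        if baseline > 0 and amount > multiple * baseline:
            spikes.append(label)
    return spikes

# Hypothetical quarterly spending series (USD):
series = [("Q1-23", 400_000), ("Q2-23", 450_000), ("Q3-23", 500_000),
          ("Q4-23", 2_100_000),   # e.g. a spike after an executive order
          ("Q1-24", 600_000), ("Q2-24", 650_000),
          ("Q3-24", 3_000_000)]   # e.g. a spike in a veto window

print(find_spikes(series))  # prints ['Q4-23', 'Q3-24']
```

The design choice worth noting is the trailing window: comparing each quarter only to the quarters before it captures the anticipatory pattern described above, where spending jumps ahead of a decision point rather than after it.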

What This Means

The data supports a straightforward conclusion: the AI industry has learned how to use lobbying to control the pace and shape of its own regulation. This is not unique to AI β€” the pharmaceutical, financial, and fossil fuel industries have all developed similar capabilities. But the speed at which the AI lobby has matured is unprecedented. In 2022, AI companies had minimal lobbying operations. By 2025, they were outspending their opponents by 10-to-1 on every major piece of legislation.

Consider the timeline. In late 2022, when ChatGPT launched, the combined AI-specific lobbying expenditure of the major technology companies was under $5 million annually. By mid-2025, it exceeded $50 million. That is a tenfold increase in less than three years β€” faster than the lobbying ramp-up for any comparable technology in recent history. The tobacco industry took decades to build its lobbying infrastructure. The pharmaceutical industry took over a decade. The AI industry did it in 36 months.

This speed reflects two factors. First, the AI companies were already large, well-capitalized technology firms with existing government affairs operations. They did not need to build lobbying infrastructure from scratch β€” they expanded existing operations to cover a new issue area. Second, the perceived stakes were enormous. AI companies project revenues in the trillions of dollars over the next decade, and any regulatory framework that imposed significant compliance costs or liability exposure threatened to reduce those projections. A $50 million investment in lobbying to protect a trillion-dollar market is not just rational β€” it is almost irresponsible not to make it, from a shareholder value perspective.

Three implications stand out:

1. State-Level Regulation Will Continue to Lead β€” and Continue to Be Weakened

With federal legislation stalled indefinitely, states like California, Colorado, and Illinois will remain the primary venues for AI governance. But as SB 1047 and SB24-205 demonstrate, state bills face the same lobbying pressures as federal ones β€” often from the same companies using the same lobbyists making the same arguments. States with stronger lobbying disclosure requirements, lower campaign contribution limits, and more professionalized legislatures will produce stronger bills. States without those safeguards will produce weaker ones or none at all.

2. The "Innovation" Argument Is a Lobbying Talking Point, Not an Empirical Claim

In all four case studies, the primary argument against regulation was that it would "stifle innovation" or "drive companies out of" the jurisdiction. This argument appeared in Governor Newsom's SB 1047 veto message, in Governor Polis's SB24-205 signing statement, in congressional testimony opposing federal legislation, and in industry coalition letters.

Yet there is no empirical evidence that AI safety regulation reduces innovation. The EU passed the AI Act in March 2024, and European AI investment has continued to grow. California passed multiple tech regulations in the 2010s (including the California Consumer Privacy Act) without driving tech companies out of the state. The "innovation" argument is effective because it is unfalsifiable β€” you cannot prove that a regulation would not have chilled innovation that otherwise would have occurred. This makes it the perfect lobbying talking point, and it appears in virtually every AI lobbying disclosure we reviewed.

3. Narrow Bills Pass; Broad Bills Die

The strategic implication for AI governance advocates is clear but uncomfortable: to get AI legislation passed in the current lobbying environment, bills must be narrow enough to avoid triggering massive industry opposition. This means regulating AI one application at a time β€” video interviews, hiring, credit scoring, healthcare β€” rather than attempting comprehensive frameworks.

The cost of this approach is significant. Application-specific regulation is slow, creates gaps where unregulated AI applications can cause harm, and cannot address systemic risks that span multiple applications (like the concentration of AI capabilities in a small number of companies). But it may be the only approach that can survive the current lobbying environment.

There is a fourth implication that is harder to quantify but may be the most important: the lobbying dynamic erodes public trust in the democratic process. When voters see their elected legislators pass a bill with overwhelming bipartisan support, only to watch it vetoed after a private lobbying campaign β€” as happened with SB 1047 β€” it reinforces the perception that government is captured by corporate interests. When voters see Congress hold years of hearings and forums without producing a single law β€” as has happened with federal AI legislation β€” it feeds cynicism about whether the legislative process can address emerging technologies at all.

The AI industry may win individual lobbying battles. But if the cumulative effect is to convince the public that AI governance is impossible within the current political system, the long-term consequences β€” for both democracy and the AI industry itself β€” could be severe. Public anger that cannot find a legislative outlet does not disappear. It finds other outlets: ballot initiatives, litigation, consumer boycotts, and eventually, the kind of blunt regulatory response that sophisticated lobbying was designed to prevent.

A Note on Solutions

This article has focused on describing the problem, not prescribing solutions. But the data does suggest several structural reforms that could change the dynamic:

  • Real-time lobbying disclosure: Most states and the federal government require lobbying disclosures on a quarterly or semi-annual basis. By the time the public learns about a lobbying campaign, the bill it targeted has often already been amended, passed, or killed. Real-time or monthly disclosure would allow journalists, advocacy groups, and voters to track lobbying activity as it unfolds.
  • Strengthening legislative staff: One reason lobbying is so effective is that legislative staff are overworked and under-resourced. Lobbyists fill the knowledge gap by providing bill analysis, technical expertise, and draft language that staffers lack the time to develop independently. Investing in legislative capacity β€” particularly technical staff who understand AI β€” would reduce legislators' dependence on industry-provided information.
  • Public financing of counter-lobbying: The spending gap between industry and public interest groups is the most consistent predictor of legislative outcomes. Public financing mechanisms β€” such as matching funds for nonprofit lobbying on consumer protection issues β€” could help close this gap.
  • Transparency in executive decision-making: The SB 1047 veto highlights the vulnerability of executive-level decisions to private lobbying. Requiring governors and presidents to disclose meetings with lobbyists during the period between legislative passage and signing/veto would bring transparency to the most opaque part of the process.

None of these reforms would eliminate the lobbying advantage enjoyed by well-funded industries. But they would make the advantage more visible, more accountable, and β€” potentially β€” more politically costly to exercise. The history of other regulated industries shows that transparency alone can shift the dynamics: the Affordable Care Act's "sunshine provisions" requiring disclosure of pharmaceutical payments to doctors meaningfully changed prescribing patterns, even though they did not ban the payments themselves.

The AI governance debate is still in its early stages. The industry's lobbying advantage is real, but it is not permanent. As public awareness grows, as the consequences of unregulated AI become more visible, and as counter-lobbying organizations mature, the balance may shift. Whether it shifts fast enough to matter β€” before the most consequential AI systems are deployed at scale with minimal oversight β€” is the central question of the next decade of technology policy.

We built the Correlation Tracker to make this dynamic visible to the public. Track any company's lobbying expenditures over time. See how spending spikes align with legislative action. Compare the spending gap between industry and public interest groups on any bill. The data is there. What we do with it is up to all of us.

All lobbying data cited in this article is sourced from official state and federal disclosure filings, aggregated by OpenSecrets and independently verified against primary sources. Company-specific figures represent total lobbying expenditures reported during the relevant period and may include spending on issues beyond AI. Where we have estimated AI-specific spending as a subset of total lobbying, we have noted the estimation methodology. For methodology questions or corrections, contact us at corrections@theailobby.com.