
OpenAI's Lobbying Spend Jumps 7x: Inside the Strategy Shift

From safety advocate to policy heavyweight

By The AI Lobby · 2025-04-10 · 6 min read
AI Overview

OpenAI went from zero lobbyists in 2022 to 35 registered lobbyists by 2025. Their strategy shifted from "regulate us" to "regulate our competitors."

OpenAI dramatically increased its federal lobbying expenditure, signaling a strategic pivot as regulation debates intensify in Washington.

From Nonprofit Lab to Washington Power Player

In 2021, OpenAI spent approximately $200,000 on federal lobbying β€” a rounding error by Washington standards, roughly what a mid-sized trade association spends in a quarter. By 2025, that figure had jumped to over $1.5 million, a 7x increase that mirrors the company's transformation from a research-focused nonprofit into one of the most valuable and controversial companies in the technology industry.

The significance goes beyond the dollar figure. It reflects a fundamental shift in how OpenAI engages with government β€” from an organization that positioned itself as a champion of AI safety and responsible development to one that actively fights state regulations, hires former government officials, and deploys the same lobbying playbook used by the tech giants it once criticized.

The Lobbying Spend in Context

OpenAI's $1.5 million in annual federal lobbying puts it far behind the industry's biggest spenders. Meta spent $53.9 million on AI-related lobbying over the same period, Amazon spent $35.2 million, and Google spent $23.5 million. But the rate of increase β€” 7x in roughly three years β€” signals a company that is rapidly scaling its political operations to match its commercial ambitions.

The spending increase coincides with several key milestones in OpenAI's corporate evolution:

  • November 2022 β€” Launch of ChatGPT, which made OpenAI the most visible AI company in the world
  • January 2023 β€” Microsoft's $10 billion investment, transforming OpenAI's financial position and corporate relationships
  • November 2023 β€” The board crisis that temporarily ousted CEO Sam Altman, highlighting governance tensions between safety and commercial interests
  • 2024-2025 β€” Accelerating commercial growth, enterprise partnerships, and the push toward artificial general intelligence (AGI)
  • 2025-2026 β€” Formal transition from a nonprofit structure to a for-profit corporation, completing the commercial transformation

Building a Government Affairs Machine

Lobbying dollars are only part of the story. OpenAI has built a sophisticated government affairs operation by hiring more than a dozen former government officials and Congressional staffers in recent years. Key hires have included:

  • Former Congressional staffers with deep relationships on committees that oversee technology policy
  • Former officials from the National Security Council and Department of Defense, reflecting OpenAI's growing interest in government AI contracts
  • Former Federal Trade Commission (FTC) and Commerce Department personnel with regulatory expertise
  • State-level government affairs specialists focused on the growing number of state AI bills

The company has also retained multiple outside lobbying firms to supplement its in-house team, a standard practice for companies navigating complex regulatory environments. These firms provide access to additional networks of former officials, committee staff, and agency personnel.

The SB 1047 Fight: A Case Study in the Shift

Perhaps no single episode better illustrates OpenAI's evolution from safety advocate to industry lobbyist than its campaign against California's SB 1047, the AI safety bill authored by State Senator Scott Wiener.

SB 1047 would have required developers of large AI models to implement safety testing protocols, maintain the ability to shut down models that pose catastrophic risks, and assume liability for certain types of severe harm caused by their models. On paper, these were the kinds of safety measures that OpenAI's founding mission statement β€” "to ensure that artificial general intelligence benefits all of humanity" β€” would seem to support.

Instead, OpenAI lobbied aggressively against the bill. The company argued that:

  • The bill's safety requirements were technically unworkable and based on speculative risks rather than demonstrated harms
  • California-specific regulation would fragment the regulatory landscape and disadvantage California-based companies
  • The bill's liability provisions would create a chilling effect on AI research and development
  • Federal legislation, not state law, was the appropriate venue for AI safety regulation

OpenAI's lobbying contributed to Governor Newsom's decision to veto SB 1047 in September 2024. The veto was a major victory for the AI industry and a defining moment in OpenAI's political transformation. The company that was founded to ensure AI safety had successfully killed a state AI safety bill.

Sam Altman's Congressional Testimony vs. Lobbying Reality

In May 2023, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee in what became one of the most-watched Congressional hearings of the year. His testimony was widely praised for its apparent candor and willingness to engage with regulatory questions. Key statements included:

  • "I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."
  • "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models."
  • "I would love for the US government to lead and develop some sort of licensing or registration regime."

These statements positioned Altman and OpenAI as pro-regulation β€” a welcome contrast to the tech industry's typical posture of resisting government oversight. But the company's subsequent lobbying actions have told a different story:

  • OpenAI fought against SB 1047, the most significant state AI safety bill ever proposed
  • The company has lobbied for federal preemption of state AI laws β€” effectively arguing for one set of (lighter) rules instead of letting states set their own standards
  • OpenAI has pushed for voluntary commitments rather than binding regulations, participating in the White House's July 2023 AI commitments while opposing legislative mandates that would make similar commitments legally enforceable
  • The company has lobbied for narrow definitions of AI that would exclude many of its products from regulatory scope
  • OpenAI has opposed open-source mandates and model-sharing requirements that would reduce its competitive advantages

The gap between Altman's Congressional testimony and OpenAI's lobbying positions has become a frequent talking point for AI regulation advocates, who cite it as evidence that the industry's public support for regulation is performative.

Anthropic Overtakes OpenAI: A New Dynamic

In a development that surprised many Washington observers, Anthropic β€” the AI safety company founded by former OpenAI executives β€” outspent OpenAI on federal lobbying for the first time in Q1 2026. Anthropic reported $1.6 million in lobbying expenditures compared to OpenAI's $1.5 million.

The milestone is symbolically significant. Anthropic was founded explicitly as a safety-focused alternative to OpenAI, and its founders left OpenAI in part due to concerns about the company's commercial direction. That Anthropic is now outspending OpenAI in Washington suggests that even the most safety-oriented AI companies have concluded that political engagement is essential.

However, Anthropic's lobbying posture differs from OpenAI's in important ways. Anthropic has been more supportive of certain regulatory measures, particularly those focused on safety testing and transparency. The company has positioned its lobbying as advocacy for "smart regulation" rather than against regulation per se β€” though critics note that the practical effect of Anthropic's preferred policies would still be significantly lighter than what many advocates want.

What OpenAI's Evolution Tells Us

OpenAI's transformation from a safety-focused nonprofit spending $200K on lobbying to a commercial juggernaut spending $1.5M+ is a microcosm of the broader AI industry's political evolution. The pattern is familiar from previous technology waves:

  • Phase 1: Indifference β€” The technology is new, the companies are small, and Washington isn't paying attention. Lobbying spending is minimal.
  • Phase 2: Proactive engagement β€” As the technology gains attention, companies position themselves as responsible actors who welcome thoughtful regulation. Testimony is offered, voluntary commitments are signed.
  • Phase 3: Defensive lobbying β€” As actual legislation threatens to impose real costs and constraints, companies shift to fighting specific bills, pushing for preemption, and advocating for industry-friendly frameworks.
  • Phase 4: Entrenchment β€” The company's lobbying operation becomes permanent, its positions harden, and its spending stabilizes at a level necessary to maintain influence.

OpenAI appears to be transitioning from Phase 3 to Phase 4. The company has built a permanent government affairs infrastructure, established positions on key policy issues, and demonstrated a willingness to fight regulation that threatens its business model β€” even regulation framed in the language of safety that OpenAI once championed.

Track OpenAI's lobbying disclosures and compare them to other AI companies on our Spending Tracker.