Anthropic
AI safety company and developer of the Claude family of AI assistants. Founded by former OpenAI researchers, Anthropic positions itself as a safety-first AI lab. The company rapidly escalated its lobbying in 2025-2026, outspending OpenAI for the first time in Q1 2026, with a major push into healthcare AI and government procurement.
Anthropic spent $1.6M on federal AI lobbying in Q1 2026, with $4.0M in total tracked spending. The company is active in 3 states and focuses on AI safety standards and alignment, healthcare AI deployment and regulation, and government and defense AI procurement.
Federal Lobbying by Quarter
Public Position
Supports thoughtful, technically informed AI regulation. Advocates for mandatory safety evaluations of frontier models and transparency requirements. Uniquely among major AI labs, it has publicly endorsed specific regulatory proposals and called for binding safety standards rather than purely voluntary commitments.
Policy Focus Areas
Key Lobbyists
Bills Lobbied On
- S. 2714 (SAFE Innovation Framework Act)
- California SB 1047 (Safe and Secure Innovation for Frontier AI Models Act)
- H.R. 3369 (National AI Commission Act)
- S. 4178 (Healthcare AI Accountability Act)
- DOD AI procurement and acquisition reform provisions
Notable Actions
- Outspent OpenAI in federal lobbying for the first time in Q1 2026
- Achieved a 344% year-over-year increase in lobbying spend from Q1 2025 to Q1 2026
- Hired Ballard Partners to pursue DOD and Pentagon AI procurement contracts
- Filed a lawsuit against the Trump administration over alleged blacklisting from government contracts
State Lobbying Activity
Campaign Contributions
Influence Score
Revolving Door
Founders moved from a leading AI lab (OpenAI) to co-found a competitor, and the company maintains deep ties to the AI policy ecosystem.