
Deepfake Election Laws: 20 States and Counting

How states are tackling AI-generated political content ahead of 2026 midterms

By The AI Lobby · 2025-03-28 · 5 min read
AI Overview

20+ states introduced deepfake election laws in 2025, but definitions vary so widely that content banned in Texas may be legal in Florida.

At least 20 states have enacted or are considering laws specifically targeting AI-generated deepfakes in political campaigns and elections.

The Deepfake Election Problem

In January 2024, voters in New Hampshire received robocalls featuring a convincing AI-generated imitation of President Biden's voice urging them not to vote in the state's primary election. The incident, traced to a Democratic political operative using commercially available AI voice-cloning technology, cost less than $1,000 to execute and reached thousands of voters before being identified as fake. It was a proof of concept that terrified election officials across the country.

Since then, the threat landscape has only grown more alarming. AI-generated deepfakes of political candidates have appeared in races at every level of government, from city council contests to Senate campaigns. The technology required to create convincing fake audio, video, and images of real people has become cheaper, faster, and more accessible with each passing month. And the legal framework for addressing these threats remains dangerously fragmented.

20+ States Act, But Only 6 Succeed

In response to the deepfake threat, more than 20 states introduced legislation in 2024 and 2025 specifically targeting AI-generated content in political campaigns and elections. However, the legislative reality has been far less impressive than the headline numbers suggest: only 6 states have actually enacted deepfake election laws as of early 2026.

The states that have passed legislation include:

  • Texas – Senate Bill 751, enacted in 2019, was one of the first deepfake election laws in the country. It makes it a criminal offense to create and distribute a deepfake video with the intent to injure a candidate or influence the result of an election within 30 days of that election.
  • Tennessee – Passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act of 2024), which protects individuals' right of publicity against AI-generated impersonation. Tennessee's law was originally motivated by the music industry (particularly the unauthorized use of AI to clone recording artists' voices) but applies broadly, covering political deepfakes as well.
  • California – Enacted AB 2655, which requires large online platforms to remove or label materially deceptive AI-generated election content, and AB 2839, which restricts the distribution of such content within 120 days of an election.
  • Minnesota – Passed legislation requiring clear disclosure when AI-generated content is used in political advertising.
  • Michigan – Enacted a law prohibiting the use of materially deceptive AI-generated media to influence an election within 90 days of the election.
  • Washington – Passed disclosure requirements for AI-generated content in political ads, building on the state's existing campaign finance transparency framework.

The remaining 14+ states saw their deepfake bills die in committee, fail floor votes, or carry over to future legislative sessions. Common reasons for failure included First Amendment concerns, definitional challenges, industry opposition, and simple legislative bottlenecks as AI bills competed for limited floor time.

Types of Deepfake Election Bills

State deepfake election bills generally fall into several categories, each with different legal theories and enforcement mechanisms:

  • Disclosure requirements – The most common approach and the most likely to pass. These bills require political ads or campaign communications that use AI-generated content to include a clear label or disclaimer. They sidestep First Amendment issues by regulating disclosure rather than restricting speech.
  • Criminalization – Some bills make the creation or distribution of election deepfakes a criminal offense, typically a misdemeanor with potential escalation to a felony for repeat offenses or particularly harmful deepfakes. These face more significant First Amendment challenges.
  • Right of publicity – Bills like Tennessee's ELVIS Act extend existing right-of-publicity protections to cover AI-generated likenesses. These are framed as property rights rather than speech restrictions, making them more legally durable.
  • Platform liability – A smaller number of bills would hold social media platforms responsible for failing to remove or label known deepfakes of political candidates. These are the most controversial and face strong industry opposition.
  • Civil remedies – Some bills create private rights of action, allowing candidates or individuals depicted in deepfakes to sue for damages. These are easier to pass than criminal provisions but harder to enforce in practice.

The Character.AI Effect: Child Safety Drives Broader AI Bills

While deepfake election laws have their own momentum, the broader push for AI content regulation has been turbocharged by high-profile incidents involving AI chatbots and minors, most notably the cases involving Character.AI.

In 2024, a 14-year-old boy in Florida died by suicide after extensive interactions with a chatbot on the Character.AI platform. The incident generated national media coverage and prompted lawsuits against the company. Subsequently, additional cases emerged involving minors who developed unhealthy emotional dependencies on AI chatbots or were exposed to harmful content through AI platforms.

These incidents have become a powerful catalyst for AI legislation more broadly. Legislators who might not have prioritized deepfake election bills are now sponsoring AI regulation bills in response to constituent and parental pressure around child safety. The result is that child safety bills and election deepfake bills are often moving through the same committees and benefiting from the same political momentum.

At least 32 states have introduced bills specifically targeting AI chatbot safety for minors, and several states have bundled child safety provisions with deepfake and election integrity measures in omnibus AI bills.

The 2026 Midterms as Catalyst

The 2026 midterm elections have created genuine urgency around deepfake legislation. Control of the U.S. House and Senate, 36 governorships, and thousands of state legislative seats will be at stake. Election officials and campaign strategists are operating under the assumption that AI-generated deepfakes will be a significant factor in the campaigns.

Several factors make 2026 particularly high-risk:

  • Technology accessibility – AI voice-cloning tools that produce convincing results from under 30 seconds of sample audio are now freely available. Video deepfake tools have similarly improved and become cheaper to use.
  • Speed of dissemination – A deepfake video posted to social media can reach millions of viewers before fact-checkers can identify and debunk it. In close races, a well-timed deepfake released in the final days before an election could be decisive.
  • Plausible deniability – Campaigns can benefit from deepfakes produced by "independent" actors without direct coordination, making accountability difficult.
  • The "liar's dividend" – Even without fake content, candidates can now dismiss real damaging audio or video as AI-generated, undermining the evidentiary value of authentic recordings.

Industry Lobbying for Narrow Definitions

The AI industry has not opposed all deepfake legislation; outright opposition to election integrity measures is politically toxic. Instead, companies have lobbied for narrow definitions that limit the scope and impact of deepfake laws:

  • Defining "deepfake" to apply only to content that is wholly AI-generated, excluding AI-enhanced or AI-edited content that uses real footage as a base
  • Limiting laws to content that is "materially deceptive", a standard that is difficult to prove and invites extensive litigation
  • Exempting satire, parody, and commentary, which is reasonable on First Amendment grounds but creates loopholes that sophisticated actors can exploit
  • Placing the enforcement burden on creators rather than platforms, reducing the obligations of social media companies to police AI-generated content
  • Supporting disclosure requirements over outright bans, arguing that labeled AI content is protected speech

The result is that even the deepfake election laws that have passed tend to be narrower than their sponsors originally intended. Meta, Google, and other platform companies have been particularly active in lobbying to limit platform liability provisions.

What's Still Missing

Despite the legislative activity, significant gaps remain in the legal framework for addressing election deepfakes:

  • No federal law – Congress has introduced multiple bills (including the REAL Political Ads Act and the Protect Elections from Deceptive AI Act) but has not passed any. The FEC has issued advisories but lacks clear statutory authority.
  • Enforcement challenges – Even in states with laws on the books, identifying the creators of deepfakes and prosecuting them before the electoral damage is done remains extremely difficult.
  • Cross-border content – AI-generated deepfakes can be created in one jurisdiction (or country) and distributed in another, creating enforcement nightmares for state-level laws.
  • Real-time detection – While California's SB 942 requires providers to make AI detection tools available, no reliable real-time system exists for identifying and flagging deepfakes as they spread on social media.

As the 2026 midterms approach, the gap between the deepfake threat and the legal response remains significant. Track deepfake legislation across all 50 states on our Bill Tracker.