Chatbot Safety · Child Safety

The Chatbot Safety Movement: 78 Bills and Counting

How Character.AI lawsuits and parental advocacy are reshaping AI regulation

By The AI Lobby · 2026-03-28 · 11 min read

The chatbot safety movement began with two teen suicides linked to Character.AI in 2024. Within 18 months, at least 78 bills targeting chatbot safety for minors had been introduced across 32 states, driven by Character.AI lawsuits and a growing parental advocacy movement.

A wave of chatbot safety legislation is sweeping state capitals, driven by high-profile lawsuits, parental advocacy, and growing concerns about AI chatbots' impact on children and teenagers. As of March 2026, at least 78 bills specifically targeting chatbot safety for minors have been introduced across 32 states — making it one of the fastest-growing categories of AI legislation in the country.

The catalyst was a series of lawsuits against Character.AI, the startup behind a popular AI companion chatbot. In 2025, families in Florida, Texas, and California filed suits alleging that Character.AI's chatbots engaged in sexually explicit conversations with minors, encouraged self-harm, and created emotional dependencies in vulnerable teenagers. The cases drew national media attention and triggered immediate legislative responses. Character.AI has since implemented safety filters and age verification, but the legal and regulatory fallout continues.

Washington State has been at the forefront. HB 2225, the "Protecting Children from AI Act," would require AI chatbot providers to implement age verification, maintain parental notification systems, and prohibit chatbots from engaging in sexual or violent content with users identified as minors. The bill passed the House with a bipartisan 87-11 vote and is currently in the Senate Commerce Committee. A companion bill, HB 2311, would create a private right of action for parents whose children are harmed by AI chatbot interactions.

Arizona's HB 2137 takes a different approach, requiring AI companies to conduct child safety impact assessments before deploying chatbots accessible to minors. Georgia's SB 289 would mandate that all AI chatbots display prominent warnings about their artificial nature when interacting with users under 18. Hawaii's HB 1847 and Idaho's SB 1203 focus specifically on educational settings, regulating the use of AI chatbots in K-12 schools. Track all these bills in our chatbot safety topic page.

The industry response has been divided. Larger companies like Google and Microsoft, which operate AI chatbots with extensive safety teams, have generally supported "reasonable" safety requirements. Startups like Character.AI and Replika, whose business models depend on emotionally engaging chatbot interactions, have pushed back against provisions they say would make their products unviable. The AI Alliance has proposed an industry self-regulatory framework as an alternative to legislation, but parental advocacy groups have rejected voluntary approaches as insufficient.

Lobbying on chatbot safety bills has been intense. In Q1 2026 alone, companies reported $8.7 million in lobbying expenditures on chatbot safety legislation across all states. See our Follow the Money page for quarterly breakdowns. The issue has created unusual political coalitions, with conservative family-values groups and progressive child welfare advocates finding common ground against an industry that spans from Silicon Valley startups to tech giants.

The chatbot safety movement represents a broader shift in AI regulation: from abstract concerns about bias and transparency to concrete harms affecting identifiable victims. Whether the 78+ bills will produce effective protections or a fragmented compliance nightmare depends on the same forces that shape all AI policy — lobbying, preemption politics, and the race between technology and law.