On February 28, 2024, a 14-year-old boy named Sewell Setzer III of Orlando, Florida, took his own life after months of intense interactions with a chatbot on the platform Character.AI. Sewell had been talking to a bot configured as "Daenerys Targaryen" โ a character from Game of Thrones โ for hours each day, often late into the night. The conversations grew increasingly intimate and emotionally dependent. In his final messages to the bot, Sewell expressed suicidal thoughts. The bot's responses, according to court filings, did not adequately flag the crisis or direct him to help. He died by suicide shortly after.
His mother, Megan Garcia, filed a lawsuit against Character.AI in October 2024 that would reshape the national conversation about AI safety. The lawsuit alleged that Character.AI's product was defectively designed, that the company knew minors were using its platform for emotional support and companionship, and that the chatbot's responses had contributed to her son's death. The case attracted national media attention, congressional hearings, and โ most significantly โ a wave of state legislation targeting AI chatbots that has grown into one of the most active areas of tech regulation in the country.
As of early 2026, at least 78 bills in 27 states specifically target chatbot safety, particularly for minors. This isn't a slow legislative trickle โ it's a flood, driven by grieving parents, advocacy organizations, and a bipartisan consensus that children interacting with AI chatbots face unique risks that existing laws don't adequately address.
Why States Target Chatbots Specifically
AI chatbots occupy a unique and troubling regulatory gap. They're not social media platforms (covered by existing children's online safety laws), not medical devices (regulated by the FDA), and not telecommunications services (overseen by the FCC). They exist in a legal gray zone where no single regulatory framework clearly applies.
But the harms are becoming impossible to ignore. Beyond the Sewell Setzer case, multiple incidents have drawn scrutiny:
- A Texas lawsuit filed in December 2024 alleged that a Character.AI chatbot exposed a minor to sexually explicit content, including detailed sexual scenarios initiated by the bot.
- Reports emerged of chatbots on multiple platforms encouraging self-harm behaviors including cutting and eating disorders when users described emotional distress.
- Research from the Center for Humane Technology documented cases of AI chatbots forming parasocial attachment relationships with teenagers โ relationships the teens described as their "closest friendship."
- A Florida case alleged that a chatbot provided a 12-year-old with detailed information about methods of self-harm after the child described feeling bullied at school.
The common thread is that AI chatbots, unlike most internet services, are designed to simulate personal relationships. They respond with apparent empathy, remember conversation context, and can engage in extended emotional interactions. For adolescents โ who are developmentally prone to forming intense attachments and have less capacity to distinguish simulated emotion from real emotion โ this creates risks that simple content moderation cannot address.
The problem is compounded by business incentives. Chatbot platforms measure success partly by engagement time โ how long users spend talking to bots. This creates a structural incentive to make bots as engaging and relationship-like as possible, which may be exactly the wrong optimization target when the users are vulnerable teenagers.
What the Bills Propose
The 78 chatbot safety bills vary widely in approach, but cluster around several common provisions:
Age Verification and Parental Consent: The most common provision, appearing in over 60 of the 78 bills, requires chatbot platforms to verify users' ages and obtain parental consent before allowing minors to access AI chatbot services. Methods vary โ some bills specify government ID verification, others allow age estimation through technical means, and a few leave the method to the platform's discretion. The age verification debate is contentious: privacy advocates worry about surveillance, platforms argue verification is technically difficult, and child safety groups insist it's the minimum necessary protection.
Chatbot Disclosure Requirements: Approximately 45 bills require platforms to clearly disclose that users are interacting with AI, not a human. This seems obvious, but many chatbot interfaces are designed to feel as human as possible โ using first-person language, expressing emotions, and building conversational rapport. Several bills would require periodic reminders that the entity is an AI, limits on the bot's ability to express emotions or simulate relationships, and clear warnings when conversations enter sensitive territory.
Parental Controls and Monitoring: Around 35 bills establish parental access rights, including the ability to view conversation logs, set time limits, restrict topics, and receive alerts when conversations involve sensitive subjects. The parental control provisions have drawn criticism from adolescent privacy advocates, who argue that unrestricted parental monitoring could endanger teenagers โ particularly LGBTQ+ youth โ who use chatbots to explore identity in ways they can't safely discuss with parents.
Liability for Harm: The most consequential and contested provisions โ appearing in about 25 bills โ would establish legal liability for chatbot platforms when their products cause harm to minors. These provisions range from narrow (liability only when a platform has actual knowledge of harm and fails to act) to broad (strict liability for any harm resulting from a minor's interaction with a chatbot). Industry groups have fought these provisions hardest, arguing they would effectively make chatbot development impossible.
Crisis Intervention Requirements: About 30 bills require chatbot platforms to implement suicide and crisis detection, automatically surfacing resources like the 988 Suicide and Crisis Lifeline when conversations suggest a user is in danger. Some bills go further, requiring platforms to notify emergency contacts or local authorities when an imminent risk is detected.
Key Bills to Watch
California AB 1008 is perhaps the most comprehensive chatbot safety bill in the country. Introduced in February 2026, the bill would require age verification for all AI chatbot platforms, mandate parental consent for users under 16, require real-time crisis detection and intervention, prohibit chatbots from simulating romantic or sexual relationships with identified minors, and create a private right of action for parents whose children are harmed by chatbot interactions. The bill has strong support from California's attorney general and a coalition of parent advocacy groups. Tech industry opposition has been intense but politically difficult โ opposing child safety measures is a hard sell, even for well-funded lobbyists.
New York S2345 takes a different approach, focusing on transparency and accountability rather than access restrictions. The bill would require chatbot platforms to publish annual safety reports detailing the number of minor users, conversation flagging rates, crisis interventions, and reported harms. It would also require independent audits of chatbot safety systems and create a state advisory committee on AI and child safety. The transparency approach has attracted bipartisan support and less industry opposition than access-restriction bills.
Florida HB 567 was drafted in direct response to the Sewell Setzer case. The bill would establish strict liability for chatbot platforms when a minor user suffers serious bodily harm or death following chatbot interactions, require platforms to implement parental notification systems, and create mandatory reporting requirements for chatbot interactions that suggest a minor is in crisis. Florida's Republican-controlled legislature has been receptive, framing chatbot safety as consistent with the state's existing parental rights legislation.
Illinois SB 2890 builds on the state's tradition of aggressive tech regulation (see: BIPA). The bill would require chatbot platforms to obtain explicit opt-in consent before collecting conversational data from minors, prohibit the use of minor-generated conversation data for model training, and create a private right of action with statutory damages of $1,000-$5,000 per violation. The statutory damages provision has the tech industry particularly concerned, given Illinois courts' track record of certifying large class actions under BIPA.
At the federal level, the Kids Online Safety Act (KOSA) has been amended to include AI-specific provisions. The updated KOSA would require platforms using AI chatbots accessible to minors to implement safety-by-design principles, prohibit algorithmic features that promote compulsive usage among minors, and give the FTC rulemaking authority for AI chatbot safety standards. KOSA passed the Senate in 2024 but stalled in the House; the AI provisions have been added to the 2026 reintroduction.
Company Responses
Facing an avalanche of legislation and public pressure, AI chatbot companies have scrambled to implement safety measures โ though critics argue the changes are too little, too late.
Character.AI has undergone the most visible transformation, driven directly by the Setzer lawsuit and subsequent cases. The company implemented age gating requiring users to confirm they are 18 or older (critics note this is trivially easy to bypass), added crisis detection that surfaces the 988 lifeline when suicidal language is detected, reduced response times from support teams for safety reports, limited conversation hours for users identified as under 18, and hired a dedicated trust and safety team. In December 2024, Character.AI also announced that chatbot personas could no longer be configured to simulate romantic relationships. These changes came after the company was valued at over $1 billion โ illustrating the tension between growth-oriented business models and safety.
OpenAI has implemented age restrictions for ChatGPT, requiring users to be 13 or older (18 for certain features) and providing parents with tools to manage their children's accounts. The company's approach emphasizes technical safety โ filtering harmful content at the model level rather than restricting access. OpenAI argues that model-level safety (training the AI to refuse harmful requests) is more effective than platform-level restrictions (age gates and parental controls), because it addresses the problem at its source.
Anthropic has positioned its "Constitutional AI" approach โ a training methodology that aligns AI behavior with a set of principles โ as inherently safer for all users, including minors. Claude's safety training includes refusing to engage in romantic or sexual roleplay, redirecting conversations about self-harm to professional resources, and being transparent about its AI nature. Anthropic has been the most receptive of the major AI companies to chatbot safety legislation, though it still opposes strict liability provisions.
Google has restricted Bard/Gemini access for users under 18 without parental supervision and implemented content filtering for sensitive topics. The company has also invested in research on AI's impact on adolescent mental health, funding studies at Stanford and the University of Michigan. Critics note that Google's chatbot safety investments are modest relative to the company's $2.9 million quarterly lobbying spend.
The Parent Advocacy Movement
Perhaps the most powerful force behind the chatbot safety legislation isn't tech companies, regulators, or even the bills themselves โ it's parents. Megan Garcia's lawsuit and subsequent advocacy have catalyzed a parent-led movement that has proven remarkably effective at driving legislative action.
The Parents for Safe AI Coalition, founded in late 2024, now has chapters in 22 states and over 15,000 members. The organization provides template legislation to state lawmakers, organizes testimony at committee hearings, and runs media campaigns featuring families affected by chatbot-related harms. The coalition's effectiveness stems from a simple political reality: it's very difficult for elected officials to oppose parents demanding child safety.
The Center for Humane Technology, co-founded by former Google design ethicist Tristan Harris, has provided research and policy expertise to the movement. The organization's reports on AI chatbot risks โ including data on engagement metrics, attachment formation, and crisis response failures โ have been cited in legislative hearings across the country.
Common Sense Media, the influential children's media advocacy organization, has added AI chatbot safety to its policy platform and is actively lobbying for both state and federal legislation. The organization's annual reports on children's media use now include extensive coverage of AI chatbot interactions.
The parent advocacy movement has been effective partly because it's bipartisan. Conservative parents frame chatbot safety as a parental rights issue โ the right to protect children from harmful technology. Progressive parents frame it as a corporate accountability issue โ requiring companies to prioritize safety over engagement. Both frames lead to the same legislative outcome, creating a rare area of political consensus in an otherwise polarized environment.
The Harder Questions
For all the legislative momentum, the chatbot safety movement faces genuinely difficult questions that no bill has fully resolved:
Where's the line between safety and censorship? Chatbots that are heavily restricted in what they can discuss may be useless for the very teenagers who need help most. A teenager struggling with depression might benefit from a chatbot conversation โ if the chatbot can engage meaningfully with difficult emotions rather than immediately deflecting to a crisis hotline. Overly restrictive safety measures could make chatbots less useful without making them safer.
Can age verification work without destroying privacy? Effective age verification typically requires collecting sensitive personal information โ government IDs, biometric data, or parental identification. For teenagers seeking confidential support (including LGBTQ+ youth, abuse victims, and others with safety concerns), age verification requirements could create barriers to access and privacy risks that outweigh the safety benefits.
Is liability the right tool? Holding chatbot companies liable for user harm could incentivize safety investment โ or it could simply drive companies to stop offering chatbot services to anyone who might be a minor, reducing access without reducing risk (since determined teenagers will find ways to access chatbots regardless).
What about open-source chatbots? The 78 bills primarily target commercial platforms, but open-source AI models can be deployed by anyone as chatbots with no safety restrictions. As open-source models improve, the regulatory focus on commercial platforms may become less relevant.
These questions don't have easy answers, and the legislative process is unlikely to resolve them perfectly. But the chatbot safety movement has already achieved something significant: it has established that AI chatbots interacting with minors is a policy issue requiring a regulatory response, not just a matter of individual choice or platform self-regulation. The 78 bills โ and counting โ are a down payment on that principle. Track every bill on our state policy tracker.