SB 942: California's AI Transparency Framework
On September 19, 2024, California Governor Gavin Newsom signed SB 942, the California AI Transparency Act, into law. The bill, authored by State Senator Josh Becker, establishes the most detailed transparency requirements for generative AI systems in the United States. Effective January 1, 2026, SB 942 requires covered providers to implement content provenance measures, offer free AI detection tools, and maintain detailed public documentation about their AI systems.
The law applies to generative AI systems with 1 million or more monthly users or visitors, effectively targeting the largest AI providers: OpenAI, Google, Meta, Anthropic, Stability AI, Midjourney, and similar companies operating at scale.
What the Law Requires
SB 942's requirements fall into several categories, each designed to make AI-generated content more identifiable and AI systems more understandable:
Content Provenance and Watermarking:
- Covered providers must include provenance data in AI-generated content — metadata that identifies the content as AI-generated, which system created it, and when it was created
- For image, video, and audio content, providers must embed a latent disclosure (a watermark that is not visible to the naked eye but can be detected by tools) that is "minimally detectable by humans" and "maximally robust" against removal or alteration
- For text content, providers must include provenance data in the output where technically feasible
- Provenance data must conform to open, widely-adopted standards — a nod to the Coalition for Content Provenance and Authenticity (C2PA) standard that many companies are already implementing
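To make the provenance requirement concrete, the sketch below shows the kind of information a provenance manifest carries, using a simplified JSON structure of our own invention. The field names are illustrative, not the actual C2PA schema; a production implementation would use the C2PA SDKs, which bind the manifest to the content with a cryptographic signature so that tampering is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, system_name: str) -> dict:
    """Illustrative provenance manifest (hypothetical fields, not the C2PA schema)."""
    return {
        "claim_generator": system_name,  # which AI system created the content
        "created_at": datetime.now(timezone.utc).isoformat(),
        # IPTC digital source type that C2PA uses to mark AI-generated media
        "digital_source_type": "trainedAlgorithmicMedia",
        # The hash binds the manifest to this exact content; any edit breaks the match
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

manifest = build_provenance_manifest(b"<generated image bytes>", "ExampleImageGen v1")
print(json.dumps(manifest, indent=2))
```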
Free AI Detection Tools:
- Covered providers must offer a free, publicly accessible AI detection tool that allows users to determine whether content was created by their AI system
- The detection tool must be available without requiring users to create an account or pay a fee
- Providers must make reasonable efforts to ensure the tool can detect their AI-generated content even if it has been "minimally modified"
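The statute does not prescribe a detection mechanism. One plausible design, assuming the provider retains a fingerprint of everything its system generates, is a simple lookup service. The sketch below is hypothetical (the function names and the in-memory "database" are ours) and deliberately exposes the weakness the "minimally modified" language targets.

```python
import hashlib

# Hypothetical store of hashes for content the provider has generated;
# a real tool would query a production database or decode an embedded watermark.
KNOWN_OUTPUT_HASHES: set[str] = set()

def register_output(content: bytes) -> None:
    """Record a hash each time the provider's system generates content."""
    KNOWN_OUTPUT_HASHES.add(hashlib.sha256(content).hexdigest())

def detect(content: bytes) -> bool:
    """Return True if this exact content is a known output of the system."""
    return hashlib.sha256(content).hexdigest() in KNOWN_OUTPUT_HASHES

register_output(b"generated image bytes")
print(detect(b"generated image bytes"))   # True
print(detect(b"generated image bytes."))  # False: a one-byte edit defeats exact hashing
```

An exact-hash lookup fails after any modification, which is why the "minimally modified" clause matters: a compliant tool needs something sturdier, such as perceptual hashing or watermark decoding, to survive crops and re-encodes.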
Public Documentation:
- Covered providers must maintain detailed documentation on their websites for each generative AI system, including a high-level summary of the training data used, the system's intended purposes and limitations, and any evaluations of the system's performance and safety
- This documentation must be updated at least annually and whenever significant changes are made to the system
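As a sketch of what that documentation might look like in machine-readable form, the schema below paraphrases the categories listed above into a Python dataclass. The field names are our own shorthand, not the statute's language.

```python
from dataclasses import dataclass

@dataclass
class SystemDocumentation:
    """Illustrative documentation record; fields paraphrase the categories above."""
    system_name: str
    training_data_summary: str      # high-level summary, not a dataset inventory
    intended_purposes: list[str]
    known_limitations: list[str]
    evaluations: list[str]          # performance and safety evaluations
    last_updated: str               # reviewed at least annually

doc = SystemDocumentation(
    system_name="ExampleImageGen v1",
    training_data_summary="Licensed stock imagery and publicly available web images.",
    intended_purposes=["Image generation inside creative tools"],
    known_limitations=["Unreliable at rendering legible text in images"],
    evaluations=["Internal red-team review of misuse scenarios"],
    last_updated="2026-01-01",
)
print(f"{doc.system_name}: documentation last updated {doc.last_updated}")
```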
The Bill Newsom Signed vs. The Bill He Vetoed
The signing of SB 942 is perhaps most notable for what it reveals by contrast. Just weeks before signing the transparency bill, Governor Newsom vetoed SB 1047, a far more ambitious AI safety bill authored by State Senator Scott Wiener. SB 1047 would have required developers of large AI models to implement safety testing protocols, maintain "kill switch" capabilities, and assume liability for catastrophic harms caused by their models.
Newsom's veto message explained his reasoning: SB 1047 applied "stringent standards to even the most basic functions" of AI models and could "stifle the very innovation that fuels advancement in favor of the well-resourced companies that can absorb costs." He argued the bill was too broad in scope and could drive AI development out of California.
The contrasting decisions reveal a clear policy calculus: transparency is acceptable; safety mandates are not. Or, more precisely, requirements that help users identify AI-generated content are politically and economically palatable, while requirements that restrict what AI systems can do or impose liability for AI harms cross a line for California's governor.
The AI industry's lobbying played a significant role in this outcome. OpenAI, Google, and Meta all lobbied aggressively against SB 1047, arguing it would harm California's AI ecosystem. They were notably quieter about SB 942 — not enthusiastic, but not launching the same kind of opposition campaign. Transparency requirements are cheaper and less constraining than safety mandates, and companies could credibly argue they were already moving toward content provenance voluntarily.
Industry Response: Compliance and Concerns
The industry response to SB 942 has been mixed but generally more accepting than the reaction to other state AI bills:
- OpenAI had already begun implementing C2PA metadata in DALL-E images before the law passed, positioning itself as ahead of the requirement. However, the company has expressed concerns about the detection tool mandate, noting that reliable detection of AI-generated text remains a technically unsolved problem.
- Google similarly pointed to its SynthID watermarking technology as evidence of existing compliance. Google's main concern has been around the "open, widely-adopted standards" requirement, which could force the company to adopt a third-party standard rather than its proprietary approach.
- Meta has been implementing AI labeling across its platforms but faces unique challenges because its open-source LLaMA models can be deployed by third parties who may not implement provenance features. Meta has argued that the law should place responsibility on deployers, not just developers.
- Anthropic has been publicly supportive of transparency measures in principle, consistent with its positioning as a safety-focused company. The company began implementing C2PA metadata in its Claude outputs in late 2025.
Technical Challenges and Limitations
While SB 942 represents a significant step toward AI transparency, it faces real technical limitations that could affect its impact:
- Text watermarking is unreliable — Unlike images and video, where watermarking technology is relatively mature, reliably watermarking AI-generated text without affecting quality remains an active research problem. Simple paraphrasing can defeat most text watermarking schemes.
- Metadata can be stripped — Provenance data embedded in images can be removed by converting file formats, taking screenshots, or using basic image editing tools. Social media platforms routinely strip metadata during upload and re-encoding, as reproduced in the sketch after this list
- Detection tools have error rates — AI detection tools produce both false positives (labeling human-created content as AI-generated) and false negatives (failing to identify AI-generated content). These error rates raise concerns about the tools' reliability.
- Open-source models complicate enforcement — When AI models are released as open source (as Meta has done with LLaMA), downstream users can modify the models to remove watermarking or provenance features, creating enforcement challenges.
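The metadata fragility noted above is easy to reproduce. The sketch below uses Pillow (a real library; the disclosure text is made up) to write an EXIF note into a JPEG and then simulates what many platforms do on upload, decoding and re-encoding the pixels. The note does not survive. Signed C2PA manifests are harder to forge than plain EXIF, but they can be stripped the same way.

```python
from io import BytesIO
from PIL import Image  # pip install Pillow

# Create an image carrying an EXIF note that marks it as AI-generated.
img = Image.new("RGB", (64, 64), "gray")
exif = Image.Exif()
exif[0x010E] = "AI-generated by ExampleImageGen v1"  # 0x010E = ImageDescription

original = BytesIO()
img.save(original, format="JPEG", exif=exif)

# Simulate a platform re-encode: decode the pixels, save a fresh JPEG.
reencoded = BytesIO()
Image.open(original).save(reencoded, format="JPEG")

print(dict(Image.open(original).getexif()))   # {270: 'AI-generated by ...'}
print(dict(Image.open(reencoded).getexif()))  # {}  the disclosure is gone
```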
Enforcement and What Comes Next
SB 942 is enforced by the California Attorney General, who can bring actions against covered providers that fail to comply. The law does not create a private right of action, meaning individual consumers cannot sue providers directly for violations.
As the first major AI transparency law to take effect in the United States, SB 942 is being closely watched by other states and by Congress. Several federal transparency bills have borrowed language from SB 942, and at least six other states have introduced similar legislation.
The law is also a test case for whether transparency-focused regulation can meaningfully address concerns about AI-generated misinformation, deepfakes, and content authenticity without the more aggressive interventions (like the safety mandates in the vetoed SB 1047) that the industry has successfully resisted.
Track California's AI legislation and other state bills on our Bill Tracker.