While Congress debates and states legislate, federal agencies are already enforcing. The Federal Trade Commission, Department of Justice, Securities and Exchange Commission, and state attorneys general have collectively brought dozens of enforcement actions targeting AI products, AI-related deception, and algorithmic harms.
Until now, no one has put them all in one place. Our Enforcement Database tracks every action we can find. Here's what the data reveals.
The FTC: Most Active AI Enforcer
The Federal Trade Commission has been far and away the most aggressive federal agency on AI enforcement. Under both the Biden and current administrations, the FTC has used its existing authority over deceptive and unfair practices to target AI companies without waiting for new AI-specific legislation.
Key FTC Actions:
- DoNotPay – $193,000 settlement (2024): The "robot lawyer" company claimed its AI could substitute for human lawyers, generate legal documents, and provide legal advice. The FTC found these claims deceptive: the AI couldn't actually do what DoNotPay advertised. The settlement included a $193,000 penalty and a requirement to notify customers that the product wasn't a substitute for legal counsel. This case established an important precedent: AI capability claims are marketing claims, and marketing claims must be truthful.
- accessiBe – $1,000,000 settlement (2025): This company sold AI-powered website accessibility tools, claiming they could make websites ADA-compliant automatically. The FTC found the AI tools didn't actually ensure compliance and, in some cases, made websites less accessible. The $1 million penalty was among the largest AI-specific FTC fines at the time. The case signaled that "AI-powered" claims don't exempt companies from the requirement that products actually work as described.
- Rite Aid – AI facial recognition ban (2023): The FTC banned Rite Aid from using AI facial recognition technology for five years after finding the company's system falsely identified customers as shoplifters, disproportionately affecting women and people of color. This was the FTC's first outright ban on a company's use of an AI technology, a significant escalation from fines and settlements.
- Evolv Technology β deceptive AI weapons screening (2024): Evolv marketed AI-powered weapons detection systems to schools, stadiums, and public venues. The FTC alleged the company overstated its system's capabilities, claiming it could reliably detect weapons when testing showed significant failure rates. The case highlighted the danger of AI overclaiming in safety-critical applications.
- NGL Labs – $5,000,000 penalty (2024): The anonymous messaging app used AI to generate fake messages, making users believe real people were sending them questions, then charged for a premium service to "reveal" who sent them. The $5 million penalty was the largest AI-related FTC fine to date. The case demonstrated that using AI to create deception (not just claiming AI capabilities) falls squarely within FTC jurisdiction.
The DOJ: Algorithmic Pricing and Beyond
The Department of Justice has taken a different but equally significant approach, focusing on how AI algorithms can facilitate market manipulation and anti-competitive behavior:
RealPage – Algorithmic Pricing (2024, ongoing):
The DOJ's case against RealPage is arguably the most important AI enforcement action in progress. RealPage provides AI-powered revenue management software used by landlords and property management companies to set rental prices. The DOJ alleges that the software effectively enables price-fixing by algorithm: landlords who are supposed to be competitors share pricing data through RealPage's system, and the AI recommends prices that are higher than what competitive markets would produce.
The implications extend far beyond housing. If the DOJ prevails, it establishes that algorithmic coordination can constitute illegal collusion even if the humans involved never directly communicated. This theory could apply to airline pricing, insurance, wages, and any other market where competitors use shared AI pricing tools.
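To make the theory concrete, here is a deliberately simplified Python sketch of how a shared pricing recommender can pull nominal competitors toward the same above-market price. This is our own illustration, not RealPage's actual software or the algorithm described in the DOJ's complaint; the function, percentile rule, and numbers are all assumptions chosen to show the mechanism.

```python
# Hypothetical illustration only -- a simplified stand-in, not RealPage's system.
# Nominal competitors each feed their privately planned rents into one shared
# recommender, which hands the same upper-percentile "market" price back to all.

def shared_recommendation(pooled_rents: list[float]) -> float:
    """Recommend a rent near the top of what all participants planned to charge."""
    ranked = sorted(pooled_rents)
    idx = int(0.75 * (len(ranked) - 1))  # roughly the 75th percentile
    return ranked[idx]

# Rents each landlord had privately planned for comparable units (made-up numbers).
landlord_plans = {
    "landlord_a": 1850.0,
    "landlord_b": 1900.0,
    "landlord_c": 2050.0,
    "landlord_d": 2100.0,
}

recommended = shared_recommendation(list(landlord_plans.values()))
print(f"Shared recommendation: ${recommended:,.0f}")  # $2,050, above what half the participants had planned

# The concern is the structure, not the math: every participant adopts the same
# recommendation derived from rivals' data, even though no landlord ever
# communicated with another directly.
```

Whether this kind of hub-and-spoke arrangement counts as collusion is exactly what the RealPage litigation will test.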
The SEC: AI-Washing
The Securities and Exchange Commission has carved out its own AI enforcement niche: going after companies that exaggerate their use of AI to attract investors.
- Delphia and Global Predictions – $400,000 combined settlement (2024): These two investment advisers claimed their products used AI and machine learning to analyze data and make investment recommendations. The SEC found the claims were materially misleading; the companies either didn't use AI as described or exaggerated its role. The $400,000 combined penalty was modest, but the signal was clear: "AI-washing," claiming AI capabilities you don't have in order to attract investors, is securities fraud.
The SEC has signaled more AI-washing cases are coming, particularly targeting companies that added "AI" to their marketing materials during the 2023-2024 AI hype cycle without materially changing their products.
State Attorneys General
State AGs have been increasingly active on AI enforcement, often in areas where federal agencies have been slower to act:
- Non-consensual deepfake imagery: Multiple state AGs have brought actions under existing harassment and privacy statutes against creators and distributors of AI-generated non-consensual intimate imagery
- AI robocalls: State AGs in several states have filed actions against companies using AI voice cloning for fraudulent robocalls, particularly targeting elder fraud schemes
- Employment AI discrimination: New York City's Local Law 144, requiring bias audits of AI hiring tools, has led to several enforcement actions and settlements
- AI-generated consumer deception: State consumer protection divisions have targeted businesses using AI chatbots that impersonate humans without disclosure
Patterns in the Data
Looking across all enforcement actions, several patterns emerge:
- Deception is the primary theory: Most actions are based on companies lying about what their AI can do, not on the AI itself causing harm. Existing consumer protection law is doing most of the work.
- Penalties are still small: The largest AI-specific penalty ($5 million for NGL Labs) is a fraction of what a large technology company earns in a single day; a firm with $100 billion in annual revenue takes in roughly $270 million every 24 hours. Until penalties scale with revenue, they're a cost of doing business.
- No comprehensive federal AI enforcement authority exists: Agencies are using existing statutes (FTC Act, securities law, antitrust) because Congress hasn't created AI-specific enforcement tools.
- State enforcement is filling gaps: Where federal agencies haven't acted, particularly on deepfakes and employment AI, states are stepping in.
What's Coming
Based on agency statements, open investigations, and regulatory signals, we expect the next wave of AI enforcement to target:
- AI-generated content in elections (FTC and FEC)
- AI discrimination in lending and insurance (CFPB and state regulators)
- AI safety claims by foundation model companies (FTC)
- More algorithmic pricing cases (DOJ Antitrust Division)
Explore every action in our Enforcement Database, filterable by agency, company, penalty amount, and outcome.
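For readers who want to slice the data programmatically, the sketch below shows one way records like these could be represented and filtered. The field names, sample entries, and helper function are our own simplification drawn from the cases above, not the database's actual schema or export format.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed record shape -- a simplification, not the database's real schema.
@dataclass
class EnforcementAction:
    agency: str                  # "FTC", "DOJ", "SEC", or a state AG
    company: str
    year: int
    theory: str                  # e.g. "deception", "antitrust", "AI-washing"
    penalty_usd: Optional[int]   # None where no fine was imposed or the case is pending
    outcome: str                 # "settlement", "ban", "ongoing", ...

actions = [
    EnforcementAction("FTC", "DoNotPay", 2024, "deception", 193_000, "settlement"),
    EnforcementAction("FTC", "NGL Labs", 2024, "deception", 5_000_000, "settlement"),
    EnforcementAction("FTC", "Rite Aid", 2023, "unfairness", None, "ban"),
    EnforcementAction("DOJ", "RealPage", 2024, "antitrust", None, "ongoing"),
    EnforcementAction("SEC", "Delphia / Global Predictions", 2024, "AI-washing", 400_000, "settlement"),
]

def filter_actions(records, agency=None, min_penalty=None):
    """Keep records that match the requested agency and minimum penalty."""
    matches = []
    for r in records:
        if agency is not None and r.agency != agency:
            continue
        if min_penalty is not None and (r.penalty_usd is None or r.penalty_usd < min_penalty):
            continue
        matches.append(r)
    return matches

# Example: FTC actions with penalties of at least $1 million.
for action in filter_actions(actions, agency="FTC", min_penalty=1_000_000):
    print(action.company, action.penalty_usd)   # NGL Labs 5000000
```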