
Who Regulates AI? A Guide to Federal Agencies

There's no single AI regulator in the United States. Instead, a patchwork of federal agencies, each with its own mandate, tools, and limitations, is scrambling to govern a technology that doesn't fit neatly into any existing box.

By The AI Lobby · 2026-04-21 · 15 min read

At least eight federal agencies have some jurisdiction over AI, from the FTC's consumer protection authority to the FDA's medical device oversight. But no single agency is in charge, creating gaps, overlaps, and a growing chorus of calls for a dedicated AI regulator.

If you build an AI system that screens job applicants, recommends medical treatments, trades stocks, drives cars, and generates marketing copy, how many federal agencies could potentially regulate you? The answer, as of 2026, is at least eight, and possibly more depending on the specifics. Welcome to the world of AI regulation in the United States, where authority is fragmented across dozens of agencies, none of which was designed to govern artificial intelligence, and all of which are racing to adapt their existing mandates to a technology that's evolving faster than any bureaucracy can keep pace.

This guide maps the federal AI regulatory landscape: who does what, where the gaps are, and why a growing number of voices are calling for something new.

Federal Trade Commission (FTC)

The FTC has emerged as the most aggressive federal regulator of AI, using its broad authority over "unfair or deceptive acts or practices" (Section 5 of the FTC Act) to police AI-related harms. The FTC doesn't need new legislation to act; it has been creatively applying its existing consumer protection and competition mandates to AI cases since the early 2020s.

Key enforcement actions:

  • Rite Aid (December 2023): The FTC banned the drugstore chain from using facial recognition technology for five years after finding that Rite Aid's AI-powered surveillance system falsely identified customers as shoplifters, disproportionately targeting women and people of color. The FTC found that Rite Aid deployed the technology without reasonable safeguards and failed to prevent harm to consumers who were wrongly accused, confronted by employees, or even detained based on false matches.
  • Everalbum/Paravision (January 2021): The FTC required the photo app company to delete the facial recognition models it had trained on users' photos without adequate consent. This was a landmark action establishing that when data is collected deceptively, the FTC can require deletion of not just the data but also the algorithms and models derived from it, a remedy known as "algorithmic disgorgement."
  • Weight Watchers/Kurbo (2022): The FTC required the company to destroy AI models trained on children's data collected without parental consent under COPPA.

The FTC has also issued extensive guidance on AI claims, warning companies against "AI washing": making exaggerated or unsubstantiated claims about AI capabilities in marketing. In a series of blog posts and business guidance documents, the FTC has signaled it will pursue companies that claim their AI systems can do things they cannot, or that fail to disclose material limitations.

Under former Chair Lina Khan (2021-2025), the FTC proposed a comprehensive AI rulemaking under its Section 18 authority, which would have established binding rules for AI in consumer-facing applications. The proposal covered algorithmic discrimination, automated decision-making transparency, and data practices. The rulemaking's future is uncertain under new leadership, but the FTC's existing enforcement authority remains potent.

Limitations: The FTC's authority is broad but shallow. It can act against deceptive and unfair practices but cannot set detailed technical standards for AI systems. It has limited resources (roughly 1,100 employees covering all consumer protection and competition matters, not just AI). And its enforcement is reactive, typically targeting companies after harm has occurred rather than setting proactive safety requirements.

Securities and Exchange Commission (SEC)

The SEC has focused on two AI-related areas: "AI washing" in investment marketing and the use of AI in trading and investment advice.

AI washing enforcement: In March 2024, the SEC fined two investment advisers, Delphia Inc. and Global Predictions Inc., a combined $400,000 for making false and misleading statements about their use of AI. Delphia claimed it "put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big." Global Predictions claimed to be the "first regulated AI financial advisor." In both cases, the SEC found the claims were exaggerated or false: the companies' actual use of AI was far more limited than their marketing suggested.

SEC Chair Gary Gensler and his successors have warned that AI washing could become as pervasive as "greenwashing" in the ESG space. The SEC has signaled it will continue pursuing enforcement actions against firms that overstate their AI capabilities to attract investors.

AI in trading: The SEC has also expressed concern about the use of AI and machine learning in algorithmic trading, robo-advisors, and market-making. A proposed rule (2023) would have required broker-dealers and investment advisers to identify and mitigate conflicts of interest in their use of "predictive data analytics," a term the SEC used as a proxy for AI/ML systems. The proposal drew fierce industry opposition and has not been finalized, but the SEC's interest in AI-driven market risks continues.

Limitations: The SEC's jurisdiction is limited to securities markets and investment activities. AI systems used outside the financial sector fall outside its purview. The agency also lacks the technical AI expertise of some other agencies and has been cautious about setting technology-specific rules.

Food and Drug Administration (FDA)

The FDA is arguably the federal agency with the most mature framework for regulating AI, having overseen AI-powered medical devices for over a decade. As of early 2026, the FDA has authorized more than 340 AI-enabled medical devices, primarily through the 510(k) clearance pathway, which requires manufacturers to demonstrate that their device is substantially equivalent to an already-marketed device.

What the FDA regulates:

  • AI-powered diagnostic tools: Systems that analyze medical images (X-rays, MRIs, CT scans, retinal images) to detect conditions like cancer, diabetic retinopathy, and heart disease. These represent the majority of cleared AI medical devices.
  • Clinical decision support software: AI systems that help physicians make treatment decisions, flag drug interactions, or predict patient deterioration.
  • AI in drug discovery: While the FDA doesn't directly regulate the research process, AI-discovered drugs must still go through the standard clinical trial and approval process.

The FDA released its Action Plan for Artificial Intelligence in Medical Devices in 2021, outlining a framework for regulating AI systems that continue to learn and change after deployment, so-called "continuously learning" algorithms. Traditional medical device regulation assumes a product is fixed at the time of clearance; AI systems that update their models based on new data challenge this assumption fundamentally.

The FDA has proposed a "predetermined change control plan" framework, where manufacturers describe in advance how their AI system will change over time and what guardrails will ensure continued safety and effectiveness. This approach, regulating the process of change rather than freezing the technology at a point in time, has been cited as a model for other agencies dealing with adaptive AI systems.
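
To make the concept concrete, here is a minimal Python sketch of a change control plan treated as a machine-checkable release gate: the manufacturer pre-declares which kinds of updates are allowed and what performance floors each update must clear. The field names, change types, and thresholds are our own hypothetical illustration, not the FDA's actual format.

```python
# Hypothetical sketch of a predetermined change control plan as a release
# gate. Change types and thresholds are illustrative, not an FDA schema.
from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    allowed_changes: set[str]   # update types declared in advance
    min_sensitivity: float      # performance floors committed at clearance
    min_specificity: float

@dataclass
class ModelUpdate:
    change_type: str
    sensitivity: float          # measured on a locked validation set
    specificity: float

def update_permitted(plan: ChangeControlPlan, update: ModelUpdate) -> bool:
    """Ship only pre-declared change types that still clear the floors."""
    return (update.change_type in plan.allowed_changes
            and update.sensitivity >= plan.min_sensitivity
            and update.specificity >= plan.min_specificity)

plan = ChangeControlPlan(
    allowed_changes={"retrain_same_architecture", "threshold_tuning"},
    min_sensitivity=0.92,
    min_specificity=0.88,
)
update = ModelUpdate("retrain_same_architecture", sensitivity=0.94, specificity=0.90)
print(update_permitted(plan, update))  # True: declared change, floors met
```

The appeal of the structure is that the regulator reviews the plan once, up front, rather than re-reviewing every retrained model.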

Limitations: The FDA's jurisdiction covers medical devices and drugs but not the broader healthcare AI applications that Anthropic and others are targeting: AI chatbots for patient communication, administrative automation, and insurance processing. These applications may not meet the legal definition of a "medical device," creating a gap in oversight for AI systems that influence healthcare decisions without directly diagnosing or treating patients.

National Highway Traffic Safety Administration (NHTSA)

The NHTSA oversees vehicle safety, putting it at the center of the autonomous vehicle debate. Its AI regulatory role has expanded dramatically as companies like Tesla, Waymo, GM Cruise, and others deploy increasingly autonomous driving systems.

Key regulatory actions:

  • Tesla Autopilot/FSD investigations: NHTSA has opened multiple investigations into Tesla's Autopilot and Full Self-Driving (FSD) systems following crashes, including fatal ones. In December 2023, Tesla recalled over 2 million vehicles for a software update to its Autopilot system after NHTSA found the system's driver monitoring was insufficient to prevent misuse. Additional investigations into FSD beta crashes are ongoing.
  • GM Cruise: Following an October 2023 incident in San Francisco where a Cruise robotaxi dragged a pedestrian, NHTSA launched an investigation that led to Cruise's voluntary suspension of driverless operations. The California DMV suspended Cruise's driverless permits, and GM subsequently scaled back the program significantly, cutting hundreds of jobs. Cruise also recalled 950 of its driverless vehicles in November 2023 to update their post-collision software.
  • Standing General Order (SGO): Since 2021, NHTSA has required manufacturers of vehicles with automated driving systems to report crashes within prescribed timeframes. This reporting requirement, covering companies from Tesla to Waymo to small AV startups, has created the first comprehensive federal dataset on autonomous vehicle safety performance.

NHTSA faces a fundamental challenge: its regulatory authority is built around a framework that assumes a human driver. Federal Motor Vehicle Safety Standards (FMVSS) reference steering wheels, brake pedals, and driver visibility, concepts that become awkward or meaningless for fully autonomous vehicles. The agency has been slow to update these standards, creating regulatory uncertainty that companies cite as an obstacle to deployment.

Limitations: NHTSA can investigate and recall vehicles but has been criticized for lacking the resources and technical expertise to rigorously evaluate complex AI driving systems. The agency's enforcement budget and staffing have not kept pace with the rapid growth of autonomous vehicle technology.

Equal Employment Opportunity Commission (EEOC)

The EEOC has focused on AI's potential to perpetuate or amplify employment discrimination. Under existing civil rights law (Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act), employers are liable for discrimination in hiring, promotion, and termination, regardless of whether the discrimination is caused by human bias or algorithmic bias.

In May 2023, the EEOC issued detailed guidance on AI and algorithmic fairness in hiring, clarifying that:

  • Employers who use AI hiring tools are responsible for ensuring those tools don't discriminate, even if the tools are provided by third-party vendors
  • An AI system that disproportionately screens out candidates of a particular race, sex, age, or disability status can violate federal anti-discrimination law even if the system doesn't explicitly consider those characteristics
  • The "four-fifths rule" โ€” a long-standing EEOC guideline for evaluating disparate impact โ€” applies to AI-driven selection procedures
  • Employers must provide reasonable accommodations for applicants with disabilities who cannot effectively interact with AI hiring systems (e.g., video interview analysis tools that disadvantage deaf applicants)
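
To make the four-fifths rule concrete, here is a minimal Python sketch of the arithmetic behind it: each group's selection rate is compared to the highest group's rate, and a ratio below 0.8 is the traditional flag for adverse impact. The group labels and counts are hypothetical, and a real audit would add statistical testing on top of this ratio.

```python
# Minimal four-fifths (80%) rule check on selection outcomes.
# Group labels and counts are hypothetical, not real audit data.

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its impact ratio: the group's selection rate
    divided by the highest group's selection rate."""
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

# Hypothetical outcomes from an AI video-interview screener:
# group -> (candidates advanced, candidates assessed)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for group, ratio in four_fifths_check(outcomes).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Here group_b's impact ratio is 0.30 / 0.48 ≈ 0.63, well under the 0.8 threshold, which is exactly the pattern the EEOC's guidance says can create liability even when the tool never sees a protected attribute.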

The EEOC has also pursued enforcement actions. In August 2023, the agency settled with iTutorGroup for $365,000 after alleging the company's AI-powered recruiting software automatically rejected applicants over certain ages โ€” a violation of the ADEA. While the settlement amount was modest, the case established that the EEOC would pursue algorithmic discrimination claims under existing law.

Limitations: The EEOC is one of the most resource-constrained federal agencies, with a backlog of over 70,000 pending charges. Its ability to proactively investigate AI hiring tools, as opposed to responding to individual complaints, is severely limited. The agency also lacks the technical expertise to conduct independent audits of AI hiring systems.

National Institute of Standards and Technology (NIST)

NIST occupies a unique role in AI governance: it doesn't regulate, but it sets the standards and frameworks that regulators and industry rely on. NIST's approach to AI has been influential both domestically and internationally.

The AI Risk Management Framework (AI RMF): Released in January 2023, the AI RMF is a voluntary framework that helps organizations identify, assess, and manage risks associated with AI systems. The framework is organized around four functions (Govern, Map, Measure, Manage) and provides detailed guidance for each. While non-binding, the AI RMF has been widely adopted by industry and referenced by other agencies and international standards bodies.
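
As a rough illustration of how a team might operationalize the four functions, the sketch below tracks a single risk-register entry and reports which functions still lack a documented action. The record structure is our own shorthand for this article, not a schema published by NIST.

```python
# Rough sketch of a risk-register entry tagged by the AI RMF's four
# functions. The structure is illustrative shorthand, not NIST's schema.
from dataclasses import dataclass, field

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    risk: str                                               # what could go wrong
    actions: dict[str, str] = field(default_factory=dict)   # function -> action

    def uncovered(self) -> list[str]:
        """RMF functions with no documented action yet."""
        return [fn for fn in RMF_FUNCTIONS if fn not in self.actions]

entry = RiskEntry(risk="Hiring model screens out older applicants")
entry.actions["Map"] = "Documented intended use and affected groups"
entry.actions["Measure"] = "Quarterly four-fifths audit of selection outcomes"
print(entry.uncovered())  # ['Govern', 'Manage'] -> gaps still to address
```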

The US AI Safety Institute (AISI): Established within NIST in November 2023 following the Biden administration's Executive Order on AI, the AISI is tasked with developing standards and tools for AI safety evaluation. The institute has conducted pre-release safety evaluations of frontier models from OpenAI, Anthropic, and Google, making it the closest thing the US has to a centralized AI safety authority. However, companies' participation is voluntary; the AISI has no power to prevent a company from releasing a model it deems unsafe.

NIST's AI work has also included standards for AI bias and fairness testing, guidelines for AI transparency and explainability, and technical standards for AI system evaluation. These standards inform procurement requirements (especially for federal agencies), industry best practices, and international harmonization efforts.

Limitations: NIST's framework is voluntary. No company is required to adopt the AI RMF or cooperate with the AISI. NIST's influence depends on industry goodwill and the decisions of other agencies to incorporate its standards into binding requirements.

Department of Defense (DOD)

The DOD is simultaneously one of the largest customers for AI technology and a regulator of AI in military applications. Its AI governance has evolved rapidly:

  • Project Maven (2017): The DOD's first major AI initiative, using machine learning to analyze drone surveillance footage. Project Maven became controversial when Google employees protested the company's involvement, leading Google to withdraw from the project and publish AI ethics principles. The controversy catalyzed the broader debate about military AI.
  • Joint Artificial Intelligence Center (JAIC) → Chief Digital and Artificial Intelligence Office (CDAO): The DOD's AI governance has been consolidated under the CDAO, established in 2022 to centralize data and AI strategy. The CDAO oversees AI adoption across the military, including procurement standards, ethical guidelines, and operational deployment.
  • AI Ethics Principles: The DOD adopted AI ethics principles in 2020, requiring that AI systems be responsible, equitable, traceable, reliable, and governable. These principles are integrated into acquisition policy, though critics argue implementation has been inconsistent.
  • Autonomous Weapons Policy: DOD Directive 3000.09 requires that autonomous and semi-autonomous weapon systems be designed to allow "appropriate levels of human judgment" over the use of force. The directive has been updated to address advances in AI capabilities, but the fundamental question remains hotly debated in military, legal, and ethical circles: under what circumstances, if any, can an AI system make lethal decisions without human approval?
  • Defense AI Strategy: The DOD's 2023 Data, Analytics, and AI Adoption Strategy calls for accelerating AI deployment across the enterprise while maintaining ethical guardrails. The strategy emphasizes AI for logistics, intelligence analysis, predictive maintenance, and decision support, not just weapons systems.

Limitations: The DOD's AI governance is focused on military applications and procurement. It has no authority over civilian AI uses and limited influence over the commercial AI market beyond its role as a major customer.

The Overlap Problem

Perhaps the most significant challenge in federal AI regulation is jurisdictional overlap. Consider a concrete example: a company develops an AI system that analyzes video interviews to assess job candidates. This single product could potentially face scrutiny from:

  • The FTC, if the company makes deceptive claims about the system's accuracy or fails to disclose material limitations
  • The EEOC, if the system disproportionately screens out candidates based on protected characteristics (race, sex, age, disability)
  • State attorneys general, under state consumer protection laws and state-specific AI hiring regulations (like New York City's Local Law 144, which requires bias audits of automated hiring tools)
  • The DOJ, if the system violates federal civil rights laws
  • The CFPB, if the hiring assessment is used in a financial services context and affects access to credit or employment in financial institutions

This overlapping jurisdiction creates real problems. Companies face compliance uncertainty: they may satisfy one agency's requirements while unknowingly violating another's. Enforcement can be duplicative or, conversely, agencies may each assume another is handling the issue, resulting in gaps. Small companies and startups, which lack the legal resources to navigate multiple regulatory regimes simultaneously, are disproportionately burdened.

The Gap Problem

Even more concerning than overlap are the gaps, areas where no federal agency has clear authority:

  • Deepfakes and synthetic media: No federal agency has clear jurisdiction over non-election-related deepfakes. The FTC can act if deepfakes are used in fraud or deceptive commercial practices, but political deepfakes, harassment deepfakes, and non-commercial synthetic media fall into a regulatory void.
  • General-purpose AI systems: ChatGPT, Claude, Gemini, and other general-purpose AI systems don't fit neatly into any existing regulatory category. They're not medical devices (FDA), not vehicles (NHTSA), not financial products (SEC/CFPB), and not employment tools (EEOC), unless they're used for those specific purposes. The platform itself exists in a regulatory no-man's-land.
  • AI safety and alignment: No federal agency is tasked with evaluating whether frontier AI models are safe to deploy in the broad sense, asking whether they might be used to generate bioweapon instructions, assist with cyberattacks, or exhibit emergent behaviors that pose societal risks. NIST's AI Safety Institute does related work, but on a voluntary basis with no enforcement authority.
  • Algorithmic transparency: No federal law requires companies to explain how their AI systems make decisions (outside of specific contexts like credit decisions under the Equal Credit Opportunity Act). The EU AI Act's transparency requirements have no US federal equivalent.

Proposals for Reform

The fragmentation of federal AI oversight has generated numerous reform proposals:

National AI Commission (H.R. 3369): Proposed bipartisan legislation that would establish a blue-ribbon commission to study the AI regulatory landscape and recommend a comprehensive governance framework. The commission would include representatives from government, industry, academia, and civil society. Supporters argue this is a necessary first step before creating new regulatory authority; critics say it's a delay tactic that substitutes study for action.

"FDA for AI" proposals: Several prominent voices โ€” including former Google CEO Eric Schmidt and various academic researchers โ€” have called for a new federal agency dedicated to AI regulation, modeled on the FDA's approach to drugs and medical devices. This agency would have pre-market approval authority for high-risk AI systems, post-deployment monitoring capabilities, and the technical expertise to evaluate complex AI models. Critics argue that a new agency would take years to establish, would be captured by industry, and would stifle innovation with bureaucratic approval processes.

Expanded NIST authority: Some proposals would give NIST's AI Safety Institute binding authority, transforming it from a voluntary standards body into a regulator that could require safety testing before model deployment and impose conditions on release. This would leverage NIST's existing technical expertise without creating an entirely new agency.

Agency coordination: Executive Order 14110 (October 2023) attempted to coordinate federal AI efforts by directing agencies to use their existing authorities to address AI risks and by establishing inter-agency coordination mechanisms. It was the most comprehensive federal AI action of its era, but it relied on agencies' existing statutory authority (an executive order cannot create new regulatory powers without legislation), and it was rescinded by the incoming administration in January 2025.

The Bottom Line

The United States has chosen, through inaction more than design, a sectoral approach to AI regulation, where existing agencies apply existing authorities to AI within their respective domains. This approach has the advantage of leveraging domain expertise (the FDA knows medicine, NHTSA knows vehicles) and avoiding a potentially slow and politicized process of creating a new regulatory body.

But the sectoral approach also means that AI risks that don't map to existing agency mandates go unaddressed. As AI systems become more general-purpose and more deeply embedded in society, the gaps in the current framework grow more significant. The question is whether Congress will act to fill those gaps, whether through a new agency, expanded existing authority, or comprehensive legislation, before a major AI-related crisis forces its hand.

We're tracking every federal agency action, proposed bill, and reform proposal on our Bill Tracker. As the regulatory landscape evolves, understanding who has authority, and who doesn't, is essential for anyone following AI policy.