
Who Regulates AI? A Guide to Federal Agencies

NIST, FTC, FDA, DOD, SEC — mapping the fragmented landscape of federal AI oversight

By The AI Lobby · 2026-02-20 · 9 min read

No single federal agency regulates AI in the United States. Instead, oversight is scattered across NIST, FTC, FDA, DOD, SEC, and others — creating gaps, overlaps, and a regulatory maze.

If you want to know who regulates artificial intelligence in the United States, the honest answer is: it's complicated. Unlike the EU, which created a comprehensive AI Act with clear regulatory authority, the U.S. has no single AI regulator. Instead, oversight is scattered across more than a dozen federal agencies, each with jurisdiction over specific sectors or practices. The result is a fragmented landscape with significant gaps, confusing overlaps, and a growing debate about whether the current system is adequate for a technology that touches every sector of the economy.

NIST (National Institute of Standards and Technology) has become the de facto standard-setter for AI safety and trustworthiness. The AI Risk Management Framework (RMF), released in January 2023 and updated in 2025, provides voluntary guidelines for identifying and managing AI risks. NIST also leads the U.S. AI Safety Institute, established under the Biden administration's Executive Order 14110, which conducts evaluations of frontier AI models. However, NIST has no enforcement authority — its frameworks are guidelines, not regulations. Companies can adopt them voluntarily, and many cite NIST compliance in their lobbying against mandatory state regulations.

The FTC (Federal Trade Commission) is the most active AI enforcer. Using its existing authority over unfair and deceptive practices, the FTC has brought enforcement actions against companies for AI-related harms, including deceptive AI claims, algorithmic discrimination, and data privacy violations. The agency has signaled that it views existing consumer protection law as broadly applicable to AI and that it does not need new legislation to act. However, the FTC's case-by-case enforcement approach means there are no comprehensive AI rules, and the agency's authority is being challenged in the courts.

The FDA (Food and Drug Administration) regulates AI as medical devices when the technology is used for clinical purposes. The agency has cleared over 700 AI-enabled medical devices through its 510(k) and De Novo pathways, primarily in radiology, cardiology, and pathology. The FDA's approach is device-specific — it evaluates individual AI products, not the underlying technology or training practices. Critics argue this leaves gaps in regulating continuous-learning AI systems that update after deployment. See our healthcare AI analysis for more on this regulatory gap.

The DOD (Department of Defense) governs military AI through Directive 3000.09 on autonomous weapons and the Responsible AI Strategy. The Chief Digital and AI Office (CDAO) oversees AI procurement and deployment across the military. Defense AI operates under a different regulatory philosophy than civilian AI — the emphasis is on capability and speed of deployment rather than consumer protection. See our military AI analysis for detailed coverage.

The SEC (Securities and Exchange Commission) has increasingly focused on AI in financial services, particularly algorithmic trading, robo-advisers, and AI-driven risk assessment. In 2025, the SEC proposed rules requiring broker-dealers and investment advisers to identify and mitigate conflicts of interest arising from AI-driven predictions and recommendations. The financial industry pushed back hard, arguing the proposed rules were overly broad.

Other agencies with AI-relevant authority include the EEOC (employment discrimination from AI hiring tools), the CFPB (AI in lending decisions), the DOT/NHTSA (autonomous vehicles), the FCC (AI in communications), and the Department of Education (AI in schools). Each operates within its existing statutory authority, applying legacy laws to new technology.

The fragmentation has prompted repeated calls for a dedicated AI agency or coordinator. The National AI Commission Act (H.R. 3369) would create a bipartisan commission to recommend a federal AI regulatory framework. Some experts advocate an "FDA for AI" that would evaluate and approve high-risk AI systems before deployment; others argue that sector-specific regulation is the right fit for a technology as diverse as AI. Track the federal debate in our federal policy tracker. What nearly everyone agrees on is that the current patchwork is not working — the open question is what replaces it.