
Healthcare AI: The Regulation Nobody's Talking About

340+ FDA-cleared AI devices, algorithms deciding insurance claims, and a regulatory framework built for stethoscopes, not machine learning

By The AI Lobby | 2026-04-21 | 14 min read

AI is already making life-and-death healthcare decisions, from reading radiology scans to approving insurance claims, but the regulatory framework hasn't caught up. With 340+ FDA-cleared AI devices and minimal post-market surveillance, the gap between deployment and oversight is growing.

While Congress debates chatbot safety and state legislators wrestle with algorithmic discrimination, artificial intelligence is quietly transforming the one sector where the stakes are literally life and death: healthcare. AI systems are already reading radiology scans, flagging pathology slides, predicting patient deterioration, recommending treatment plans, processing insurance claims, and deciding prior authorization requests. Some of these systems have FDA clearance. Many do not. And the regulatory framework governing all of them was designed for an era of stethoscopes and X-ray machines, not machine learning models that update themselves after deployment.

The numbers illustrate the scale. The FDA has cleared over 340 AI-enabled medical devices through its 510(k) and De Novo pathways, with the pace accelerating: more devices were cleared in 2025 alone than in all years prior to 2020 combined. The vast majority, over 75%, are in radiology, where AI assists in detecting conditions like breast cancer, lung nodules, and stroke. Cardiology and pathology are the next largest categories. These devices range from simple image-enhancement tools to sophisticated diagnostic systems that can identify cancers invisible to the human eye.

But FDA-cleared devices are only part of the story, and arguably the less concerning part. A growing universe of AI applications in healthcare operates outside the FDA's traditional jurisdiction: clinical decision support tools that "assist" rather than "replace" physician judgment, administrative AI that processes insurance claims and prior authorization requests, and general-purpose AI models like ChatGPT and Claude that physicians increasingly use informally for clinical questions. The regulatory gap between what AI does in healthcare and what regulators oversee is vast and growing.

The FDA's Square-Peg Problem

The FDA's framework for regulating medical devices was designed for physical products: devices with fixed functionality that can be tested before market release and monitored afterward. The 510(k) pathway, which accounts for most AI device clearances, requires manufacturers to demonstrate that their product is "substantially equivalent" to a legally marketed device. The De Novo pathway is for novel devices without a clear predicate.

Both pathways have a fundamental problem with AI: they evaluate a static product at a point in time, but many AI models are designed to learn and change after deployment. A radiology AI cleared by the FDA in January may behave differently by June if it has been retrained on new data. The FDA recognized this issue and in 2021 proposed a "Predetermined Change Control Plan" framework that would allow manufacturers to describe anticipated modifications upfront and make them without new submissions. But the framework is still evolving, and its practical implementation has been limited.
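To make the concept concrete: a change control plan pre-specifies quantitative acceptance criteria that any retrained version of a model must meet before the update ships. The sketch below is a minimal, hypothetical illustration of such a gate; the metrics, thresholds, and function names are assumptions for illustration, not values drawn from FDA guidance.

```python
# Hypothetical change-control gate for a retrained diagnostic model.
# All thresholds are illustrative assumptions, not FDA-specified values.
from sklearn.metrics import roc_auc_score, recall_score

ACCEPTANCE_CRITERIA = {
    "min_auc": 0.92,          # absolute performance floor
    "min_sensitivity": 0.90,  # missed findings are the costliest error here
    "max_auc_drop": 0.02,     # allowed degradation vs. the originally cleared model
}

def passes_change_control(y_true, y_score, cleared_auc):
    """Return True only if the retrained model meets the pre-specified criteria."""
    auc = roc_auc_score(y_true, y_score)
    sensitivity = recall_score(y_true, [int(s >= 0.5) for s in y_score])
    return (
        auc >= ACCEPTANCE_CRITERIA["min_auc"]
        and sensitivity >= ACCEPTANCE_CRITERIA["min_sensitivity"]
        and (cleared_auc - auc) <= ACCEPTANCE_CRITERIA["max_auc_drop"]
    )
```

The point is not the specific numbers but that the criteria are fixed in advance, so a manufacturer cannot quietly move the goalposts after retraining.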

The bigger gap is what the FDA doesn't regulate at all. Clinical decision support (CDS) software is largely exempt from FDA oversight under the 21st Century Cures Act, as long as it's intended to support (not replace) a healthcare professional's judgment, the professional can independently review the basis for the recommendation, and it doesn't acquire or process medical images or signals. This exemption was designed for simple rule-based systems ("alert the doctor if the patient's potassium is above 6.0"), but it's increasingly being applied to sophisticated AI systems that generate complex recommendations from vast datasets. The line between "supporting" and "replacing" physician judgment is blurry at best.
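The difference is easy to see side by side. The sketch below contrasts the kind of threshold rule the exemption contemplated with the kind of opaque, model-driven recommendation increasingly shipped under the same label; the model object and feature inputs are hypothetical.

```python
# The kind of rule-based alert the Cures Act exemption contemplated:
# transparent, and the clinician can independently verify its basis.
def potassium_alert(potassium_mmol_per_l: float) -> bool:
    return potassium_mmol_per_l > 6.0

# The kind of "decision support" increasingly shipped under the same exemption:
# a recommendation derived from a black-box risk model whose basis the
# clinician cannot independently review. (Hypothetical model and features.)
def sepsis_recommendation(model, patient_features) -> str:
    risk = model.predict_proba([patient_features])[0][1]  # opaque risk score
    return "recommend ICU transfer" if risk > 0.8 else "continue monitoring"
```

Both functions "support" a clinician's decision, but only the first lets the clinician independently review the basis for the recommendation, which is the statutory test.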

General-purpose AI models used in healthcare present yet another regulatory challenge. When a physician uses Claude or GPT-4 to help interpret lab results or draft differential diagnoses (something surveys suggest is already happening at significant scale), that use is entirely outside the FDA's regulatory framework. The models aren't marketed as medical devices, the companies don't claim clinical applications (even as they develop them), and no premarket review occurs.

The Insurance Algorithm Scandal

If FDA-regulated diagnostic AI is the visible tip of the healthcare AI iceberg, insurance algorithms are the vast mass below the waterline: less visible, less regulated, and potentially more consequential for patients.

The most prominent case involves UnitedHealth Group and its nH Predict algorithm. A class action lawsuit filed in November 2023 alleged that UnitedHealth used nH Predict to systematically deny post-acute care coverage for elderly patients on Medicare Advantage plans. The lawsuit claimed the algorithm had an error rate of roughly 90%, based on the share of its denials that were reversed when patients appealed, but that UnitedHealth continued using it because the denials saved money and only a small fraction of patients ever appealed. Internal documents cited in the lawsuit suggested that UnitedHealth employees were instructed to follow the algorithm's recommendations even when the recommendations conflicted with their own clinical assessments.

The nH Predict case illuminated a broader problem: AI-driven prior authorization and claims processing across the health insurance industry. Insurers including Cigna, Humana, and Aetna use algorithmic systems to process claims, authorize treatments, and determine coverage. These systems can review thousands of cases per hour, far faster than human reviewers, but their decision-making processes are opaque to patients, physicians, and regulators. When an algorithm denies a claim, patients often receive a generic denial letter with no indication that an AI made the decision.

CMS (the Centers for Medicare & Medicaid Services) has regulatory authority over Medicare Advantage plans and Medicaid managed care, but its oversight of algorithmic decision-making has been minimal. In early 2025, CMS issued guidance reminding insurers that coverage decisions must be based on individual clinical assessments, not solely on algorithmic predictions, but enforcement has been limited and the guidance lacks the force of law.

State insurance regulators have been more active. Colorado's Division of Insurance issued a bulletin requiring insurers to disclose the use of AI in claims decisions. New York's Department of Financial Services proposed regulations requiring bias testing for insurance AI. But state-by-state regulation of national insurance companies creates the same "patchwork" problem that exists in other AI regulatory domains.

Anthropic's Healthcare Push

Against this regulatory backdrop, Anthropic has made healthcare a central focus of its commercial and lobbying strategy. The company's $1.6 million in Q1 2026 lobbying, its highest ever, was driven significantly by healthcare-related advocacy. Lobbying disclosures reference the Healthcare AI Accountability Act (S. 4178), healthcare procurement policy, and FDA regulatory frameworks for AI.

Anthropic has launched Claude for Healthcare, positioning its AI model for clinical decision support, medical research, and administrative automation. The company has partnered with health systems and is pursuing integration with electronic health record (EHR) platforms. Anthropic's pitch is that Claude's "Constitutional AI" safety training makes it better suited for high-stakes medical applications than less safety-focused competitors.

The strategic logic is clear: healthcare is an enormous market (nearly $5 trillion annually in the U.S.), AI's value proposition in healthcare is compelling (reducing errors, improving efficiency, expanding access), and first movers who shape the regulatory framework will have a lasting advantage. By lobbying on the Healthcare AI Accountability Act while deploying Claude for Healthcare, Anthropic is trying to write the rules and win the market simultaneously.

This isn't unique to Anthropic. Microsoft acquired Nuance Communications for $19.7 billion in 2022, gaining DAX (Dragon Ambient eXperience), an AI-powered clinical documentation tool used by hundreds of thousands of physicians. Microsoft has integrated Nuance/DAX with its Azure cloud platform and is adding GPT-powered features, creating a healthcare AI pipeline from clinical documentation to decision support.

Epic Systems, the dominant electronic health record company, whose systems hold records for more than 250 million patients in the U.S., has partnered with Microsoft to integrate AI capabilities directly into its EHR platform. When your doctor uses Epic, AI features are increasingly embedded in the workflow: summarizing patient histories, suggesting orders, and drafting clinical notes. Epic's market dominance means these AI features will reach physicians at scale, whether regulators are ready or not.

Google Health has invested heavily in medical AI research, including dermatology AI, pathology AI, and its Med-PaLM medical language model. Google's approach has been more research-focused than commercial, but the company is increasingly seeking FDA clearances and clinical partnerships.

The Healthcare AI Accountability Act

The Healthcare AI Accountability Act (S. 4178), introduced in the Senate in early 2026, represents the most serious legislative attempt to address the healthcare AI regulatory gap. The bill would:

  • Require transparency when AI systems are used in clinical decisions, coverage determinations, or treatment recommendations: patients and providers would have the right to know when an AI contributed to a healthcare decision affecting them.
  • Establish FDA authority over AI systems used in clinical decision support, closing the 21st Century Cures Act exemption for AI-powered CDS tools that function as de facto diagnostic systems.
  • Mandate post-market surveillance for FDA-cleared AI medical devices, requiring manufacturers to report performance data, adverse events, and algorithmic drift (changes in AI behavior over time).
  • Regulate insurance AI by requiring health insurers to disclose the use of algorithmic systems in coverage decisions, submit algorithms for review by CMS, and maintain human oversight of AI-driven denials.
  • Create reporting requirements for healthcare AI adverse events, similar to the existing framework for medical device adverse events but adapted for algorithmic systems.
  • Fund research on healthcare AI bias, safety, and effectiveness through a new program at the Agency for Healthcare Research and Quality (AHRQ).

The bill has attracted bipartisan interest but also intense lobbying. Healthcare AI companies and industry groups argue that overly prescriptive regulation could slow the adoption of beneficial AI tools. Insurers have lobbied against the algorithmic transparency provisions, arguing that disclosing their algorithms would reveal proprietary business information and enable gaming. The FDA has expressed support for expanded authority but requested additional resources to handle the workload.

Anthropic, notably, has lobbied in support of S. 4178, or at least key provisions of it. This is consistent with the company's strategy of supporting regulatory frameworks that validate its safety-focused approach and create barriers to entry for less safety-conscious competitors. If the Healthcare AI Accountability Act passes with strong safety requirements, Anthropic's Constitutional AI training becomes a competitive advantage rather than just a marketing talking point.

The Post-Market Surveillance Problem

Perhaps the most critical gap in healthcare AI regulation is what happens after a device reaches the market. The FDA's pre-market review process, while imperfect, at least evaluates AI devices before they're used on patients. Post-market surveillance, the monitoring of devices after they're deployed, is far weaker.

Of the 340+ FDA-cleared AI medical devices, the vast majority were cleared through the 510(k) pathway, which has the least stringent post-market requirements. Manufacturers are required to report adverse events, but there's no systematic monitoring of AI performance, no requirement to report algorithmic drift, and limited oversight of how devices perform across different patient populations.

This matters because AI medical devices can behave differently across populations. A radiology AI trained primarily on data from academic medical centers may perform poorly in rural hospitals serving different patient demographics. A pathology AI validated on one type of tissue preparation may give inaccurate results with a different lab's slides. These population-level performance variations won't show up in pre-market testing but can have serious consequences in clinical practice.
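One way to surface that variation is to report post-deployment performance stratified by site and population rather than as a single aggregate number. Below is a minimal sketch, assuming a post-market log of model scores and confirmed outcomes tagged with the deployment site; the file and column names are hypothetical.

```python
# Stratified post-market check: the same metric, reported per deployment site.
# Aggregate performance can hide large site-to-site differences.
import pandas as pd
from sklearn.metrics import roc_auc_score

log = pd.read_csv("post_market_log.csv")  # assumed columns: site, y_true, y_score

for site, group in log.groupby("site"):
    auc = roc_auc_score(group["y_true"], group["y_score"])
    print(f"{site}: n={len(group)}, AUC={auc:.3f}")
```

The same grouping could be done by age band, sex, or other demographics where the data allow, which is what the population-stratified analysis discussed later in this piece would require.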

Several high-profile failures have illustrated the post-market surveillance gap. An AI system for detecting diabetic retinopathy, one of the first autonomous AI diagnostic devices cleared by the FDA, showed significantly worse performance in real-world deployment than in clinical trials, with higher rates of ungradable images and longer patient wait times. The failure was identified through academic research, not FDA surveillance.

A study published in Nature Medicine found that several FDA-cleared radiology AI devices showed performance degradation over time: a form of algorithmic drift in which the model's accuracy decreases as the real-world data it encounters diverges from its training data. Without systematic post-market monitoring, this degradation may go undetected until it causes patient harm.
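Detecting that kind of drift does not require anything exotic. Even a simple comparison of the model's recent output distribution against the scores it produced on its original validation data can flag that incoming cases no longer resemble the data the model was cleared on. A minimal sketch using a two-sample Kolmogorov-Smirnov test follows; the data and threshold are illustrative stand-ins.

```python
# Minimal drift check: compare recent model scores against validation-era scores.
# A significant distribution shift suggests the incoming data no longer
# resembles the data the model was originally evaluated on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
validation_scores = rng.beta(2, 5, size=5_000)  # stand-in for cleared-era scores
recent_scores = rng.beta(2, 3, size=2_000)      # stand-in for this month's scores

statistic, p_value = ks_2samp(validation_scores, recent_scores)
if p_value < 0.01:  # illustrative threshold, not a regulatory standard
    print(f"Possible drift: KS={statistic:.3f}, p={p_value:.2e}; trigger a performance audit")
```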

Drug Discovery and the AlphaFold Revolution

Beyond clinical care and insurance, AI is transforming pharmaceutical research in ways that raise their own regulatory questions. AlphaFold, DeepMind's protein structure prediction system, has predicted the 3D structures of virtually all known proteins, a scientific achievement of historic magnitude. Drug companies are using AlphaFold and similar AI tools to identify drug targets, design molecules, and predict clinical trial outcomes.

The regulatory question is whether AI-driven drug discovery requires new oversight mechanisms. Current FDA drug approval processes evaluate the end product (the drug) regardless of how it was discovered. But as AI tools become more central to drug design (identifying candidates, predicting interactions, and even designing clinical trials), regulators may need to evaluate the AI tools themselves, not just their outputs.

Pharmaceutical companies are also using AI for "real-world evidence": analyzing electronic health records, insurance claims, and other data to generate evidence about drug effectiveness and safety outside of traditional clinical trials. The FDA has been cautiously supportive of real-world evidence but has not established comprehensive standards for AI-generated evidence. The potential for biased data, confounded analyses, and p-hacking at algorithmic scale is significant.
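The p-hacking risk, at least, has well-understood mitigations: when an automated pipeline screens hundreds of candidate drug-outcome associations, the resulting p-values need a multiple-comparisons correction before any single finding is treated as evidence. A minimal sketch using the Benjamini-Hochberg false discovery rate procedure follows; the simulated p-values are placeholders.

```python
# When an RWE pipeline tests hundreds of associations, uncorrected p-values
# will produce "significant" findings by chance alone.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
p_values = rng.uniform(size=500)  # placeholder p-values from 500 automated comparisons

rejected, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"Nominally significant: {(p_values < 0.05).sum()}")
print(f"Significant after FDR correction: {rejected.sum()}")
```

Corrections like this address only the statistical half of the problem; they do nothing about biased source data or confounding, which is why comprehensive standards matter.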

What Needs to Happen

The healthcare AI regulatory gap is not a future problem; it's a present one. AI systems are already making consequential healthcare decisions for millions of Americans, and the regulatory framework is years behind the technology. Several reforms are urgently needed:

  • Close the CDS exemption: The 21st Century Cures Act exemption for clinical decision support made sense for rule-based alerts. It doesn't make sense for AI systems that generate complex clinical recommendations from opaque models. The FDA needs clear authority over AI-powered CDS.
  • Build real post-market surveillance: The FDA needs a systematic framework for monitoring AI medical device performance after deployment, including mandatory performance reporting, algorithmic drift detection, and population-stratified analysis.
  • Regulate insurance AI: CMS and state insurance regulators need explicit authority and resources to oversee algorithmic claims processing and prior authorization. The nH Predict scandal showed what happens when insurance AI operates without oversight.
  • Require transparency: Patients deserve to know when AI contributes to their care or coverage decisions. This isn't just an ethical principle; it's necessary for accountability and error correction.
  • Fund independent evaluation: The FDA, AHRQ, and NIH need resources to independently evaluate healthcare AI systems rather than relying on manufacturer-submitted data.

The Healthcare AI Accountability Act (S. 4178) addresses many of these needs, but its path through Congress is uncertain. In the meantime, healthcare AI continues to be deployed faster than anyone can regulate it. The question isn't whether AI will transform healthcare; it already has. The question is whether the transformation will be governed by evidence, accountability, and patient safety, or by the commercial priorities of the companies building the technology. Track healthcare AI policy developments on our federal tracker and company lobbying on our Follow the Money page.