
AI Risks in Healthcare

Court cases, HIPAA breaches, FDA actions, and clinical safety incidents — scored from public records.


Industry overview

Healthcare is the highest-stakes domain for AI deployment. A misdiagnosis, a leaked record, or a hallucinated dosage is not a refund-and-apologize event — it is a regulatory action, a malpractice suit, or a patient harmed. Generative models that cite non-existent studies, ambient-scribe tools that invent symptoms, and triage agents that systematically downgrade certain patient groups have all surfaced in the last eighteen months. The gap between vendor marketing and clinical performance is wider here than in any other vertical, and the cost of closing it falls on the provider, not the model.

Key risks for Healthcare

PHI exposure and HIPAA breaches

Many AI vendors process protected health information through third-party LLM APIs without proper Business Associate Agreements, default-off retention controls, or audit logging. A single misconfigured integration can expose tens of thousands of records and trigger a multi-million-dollar OCR settlement.
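
To make the failure mode concrete, here is a minimal outbound-guard sketch in Python. Everything in it is illustrative: the regex patterns cover only a few identifier classes (HIPAA's Safe Harbor method lists eighteen), and `guarded_send` stands in for whatever client the integration actually uses. The point is the shape: redact before the prompt leaves the boundary, and write an audit record for every call.

```python
import logging
import re
from datetime import datetime, timezone

# Hypothetical outbound guard: redact obvious identifiers and write an audit
# entry before a prompt leaves the covered entity's boundary. The patterns
# are illustrative only; they are not a complete de-identification method.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

audit_log = logging.getLogger("phi_outbound_audit")

def redact_phi(prompt: str) -> tuple[str, list[str]]:
    """Replace matched identifiers with placeholders; report what was hit."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, hits

def guarded_send(prompt: str, send_fn):
    """Wrap any LLM client call: redact first, then log the attempt."""
    clean, hits = redact_phi(prompt)
    audit_log.info("outbound LLM call %s redactions=%s",
                   datetime.now(timezone.utc).isoformat(), hits)
    return send_fn(clean)
```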

Clinical hallucination and fabricated evidence

LLM-generated clinical notes have been shown to invent symptoms, dosages, and direct patient quotes with no basis in the recorded encounter. Ambient-scribe products have produced documentation that would not pass a chart review. Liability for what enters the medical record sits with the clinician, not the vendor.
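
One low-cost control is to check the generated note against the encounter transcript before it reaches the chart. The sketch below is a crude first pass under that assumption: it flags note sentences containing clinical terms (a tiny illustrative vocabulary) that never appear in the transcript. It surfaces candidates for human review; it is not a substitute for one.

```python
import re

# Illustrative term list: dose strings plus a few symptoms. A real pipeline
# would use clinical NLP; this only demonstrates the comparison step.
CLINICAL_TERMS = re.compile(
    r"\b(\d+\s?(?:mg|mcg|ml|units?)|nausea|dyspnea|fever|chest pain)\b",
    re.IGNORECASE,
)

def unsupported_claims(note: str, transcript: str) -> list[str]:
    """Flag note sentences whose clinical terms never occur in the transcript."""
    transcript_lower = transcript.lower()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", note):
        for term in CLINICAL_TERMS.findall(sentence):
            if term.lower() not in transcript_lower:
                flagged.append(sentence.strip())
                break
    return flagged

note = "Patient reports chest pain. Started on 40 mg furosemide."
transcript = "patient denies pain. we discussed diet."
print(unsupported_claims(note, transcript))  # both sentences flagged for review
```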

FDA scope creep

Tools that began as "documentation aids" or "decision support" can drift into territory that meets the FDA Software as a Medical Device (SaMD) threshold. Deploying an unauthorized SaMD function — even in a pilot — exposes the institution to enforcement, not just the vendor.

Bias amplification in triage and diagnosis

AI tools trained on historical claims data can reproduce and amplify the disparities baked into that data — under-triaging Black patients, mis-scoring symptoms in women, ignoring rare-disease presentations. These failures are increasingly the subject of civil-rights enforcement, not just academic critique.
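
The audit itself is not exotic. Assuming model triage scores can be joined to ground-truth acuity and patient demographics (field names below are illustrative), a disparity check reduces to error rates computed per subgroup rather than in aggregate:

```python
from collections import defaultdict

def under_triage_rates(records, group_key="race"):
    """Share of truly high-acuity patients the model scored low, per group."""
    high_acuity = defaultdict(int)
    missed = defaultdict(int)
    for r in records:
        if r["true_acuity"] == "high":
            g = r[group_key]
            high_acuity[g] += 1
            if r["model_triage"] == "low":
                missed[g] += 1
    return {g: missed[g] / high_acuity[g] for g in high_acuity}

# Toy records showing the shape of the join, not real data.
records = [
    {"race": "black", "true_acuity": "high", "model_triage": "low"},
    {"race": "black", "true_acuity": "high", "model_triage": "high"},
    {"race": "white", "true_acuity": "high", "model_triage": "high"},
]
print(under_triage_rates(records))  # {'black': 0.5, 'white': 0.0}
```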

Regulatory surface

Active enforcement surfaces include HIPAA / HITECH (HHS-OCR), FDA SaMD pathways, FTC Section 5 unfairness, EU AI Act high-risk categorization, and state-level AI-in-healthcare statutes (Colorado, California, New York).

Buyer checklist

1. Signed BAA covering every subprocessor (the LLM provider, the embedding service, the storage tier); a minimal coverage check is sketched after this list.

2. PHI retention and training opt-out documented in the contract, not the marketing site.

3. Clear classification: is the function non-device, decision support, or SaMD? If unclear, treat as SaMD.

4. Bias evaluation across the institution's actual patient mix, not the vendor's benchmark dataset.

5. Incident-response SLA: who notifies whom, on what clock, when the model produces a clinically harmful output?
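
For item 1, the useful discipline is an explicit inventory of every subprocessor that touches PHI, checked mechanically. A toy sketch, with illustrative names, assuming such an inventory is maintained:

```python
# Toy BAA-coverage check over a hypothetical subprocessor inventory. A new
# subprocessor that touches PHI without a signed BAA should fail loudly,
# ideally in CI, before it fails in an OCR investigation.
SUBPROCESSORS = [
    {"name": "llm-api-provider",  "touches_phi": True,  "baa_signed": True},
    {"name": "embedding-service", "touches_phi": True,  "baa_signed": False},
    {"name": "analytics-cdn",     "touches_phi": False, "baa_signed": False},
]

gaps = [s["name"] for s in SUBPROCESSORS
        if s["touches_phi"] and not s["baa_signed"]]
if gaps:
    raise RuntimeError(f"PHI subprocessors without a signed BAA: {gaps}")
```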

Frequently asked

Is using ChatGPT for clinical documentation a HIPAA violation?

Using a consumer ChatGPT account for documentation that contains PHI is, in nearly all cases, a HIPAA violation — there is no BAA, no audit log, and prompts may be retained for training. Enterprise tiers with signed BAAs and zero-retention modes can be compliant when configured correctly, but the burden is on the covered entity to verify and document that configuration.

Does the FDA regulate AI-powered clinical documentation tools?

The FDA regulates clinical decision support and Software as a Medical Device. Pure documentation tools generally fall outside that perimeter, but the line is functional, not nominal. A tool marketed as "documentation" that surfaces diagnostic suggestions or flags abnormal values may meet the SaMD definition and require clearance.

Get alerts when Healthcare risk scores change.

Court cases, breaches, and regulatory actions — pushed to you when they affect this industry.