AI-Generated Deepfake X-Rays Deceive Radiologists and AI Systems

A multi-center study found that radiologists and advanced AI models cannot reliably distinguish AI-generated deepfake X-ray images from authentic ones. This vulnerability exposes healthcare to risks such as misdiagnosis, fraudulent litigation, and cybersecurity threats, highlighting the urgent need for improved detection tools and training. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (ChatGPT, RoentGen, and other generative models) producing synthetic medical images that neither experts nor AI detectors can reliably distinguish from real ones. This misuse, or potential malicious use, of AI-generated deepfakes can directly lead to harms such as fraudulent litigation, clinical misdiagnosis, and cybersecurity attacks causing clinical chaos, all of which are harms to health and communities. The study documents these risks with evidence of actual AI-generated images deceiving professionals, meeting the criteria for an AI Incident rather than a mere hazard or complementary information. [AI generated]

AI principles
Safety; Robustness & digital security

Industries
Healthcare, drugs, and biotechnology; Digital security

Affected stakeholders
Consumers; Workers

Harm types
Physical (injury); Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

These medical X-rays are all deepfakes -- and they fool even radiologists

2026-03-24
Nature
Why's our monitor labelling this an incident or hazard?
The article involves AI systems generating synthetic medical images and discusses the plausible risks these images pose to research integrity, clinical workflows, and legal contexts. It does not, however, report a concrete event in which AI-generated images directly or indirectly caused harm or legal violations; instead, it focuses on raising awareness, training radiologists, and proposing mitigation strategies. It is therefore best classified as Complementary Information: it provides important context on AI-generated medical images, their potential impacts, and responses to them, but does not describe a specific AI Incident or AI Hazard.

Even Doctors Can't Tell These AI X-Rays Are Fake

2026-03-24
SciTechDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating synthetic medical images (deepfake X-rays) and discusses the risks these pose to healthcare, including potential fraud and cybersecurity attacks that could disrupt medical diagnosis and records. While the study does not report actual incidents of harm, it clearly identifies credible and significant risks that could plausibly lead to injury, harm to health, or disruption of critical healthcare infrastructure. The event therefore fits the definition of an AI Hazard: it describes circumstances in which AI system use could plausibly lead to an AI Incident in the future. It is not an AI Incident, because no actual harm has yet occurred; nor is it merely Complementary Information or Unrelated, since the focus is on the risk and vulnerability posed by AI-generated medical images.

AI-Generated Medical Images Deceive Even Top Radiologists

2026-03-24
Neuroscience News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT, RoentGen, and other generative models) producing synthetic medical images that neither experts nor AI detectors can reliably distinguish from real ones. This misuse, or potential malicious use, of AI-generated deepfakes can directly lead to harms such as fraudulent litigation, clinical misdiagnosis, and cybersecurity attacks causing clinical chaos, all of which are harms to health and communities. The study documents these risks with evidence of actual AI-generated images deceiving professionals, meeting the criteria for an AI Incident rather than a mere hazard or complementary information.

Study finds deepfake X-rays indistinguishable from real ones

2026-03-24
WGN-TV
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential for AI-generated deepfake X-rays to deceive radiologists and AI systems, which could plausibly lead to harm such as misdiagnosis or fraud in the future. Since no actual harm or incident is reported, and the main content is about raising awareness and discussing risks, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Deepfake X-rays can deceive radiologists and AI systems

2026-03-24
News-Medical.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic medical images (deepfake X-rays) that experts and AI systems have demonstrably been unable to distinguish reliably from real images. This creates a direct risk of harm to health (a), as fraudulent or manipulated images could cause misdiagnosis or clinical chaos. The study's findings show that use of the AI system has already produced a vulnerability that constitutes harm, not merely a potential hazard. This therefore qualifies as an AI Incident, given the realized harm and the direct involvement of AI-generated content in medical imaging deception.

Deepfake X-Rays Sneak Past Radiologists and AI, Underscoring Abuse Potential

2026-03-24
MedPage Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic medical images (deepfakes) that have led directly to a realized harm: radiologists and AI detection tools cannot reliably identify fake X-rays. This creates a direct risk of harm to health (a), harm to medical research integrity (d), and potential violations of rights through malicious use. The article documents actual study results demonstrating these harms and discusses real-world implications and risks, not just theoretical concerns. This therefore qualifies as an AI Incident, because the AI system's use has led directly to significant, materialized, and documented harms or risks of harm.

Deepfake X-rays Deceive Radiologists, AI

2026-03-24
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic medical images (deepfake X-rays) that experts and AI models have demonstrably been unable to distinguish reliably from real images. This creates a direct risk of harm to patient health and medical integrity (harm to persons and communities), as well as potential cybersecurity risks that could disrupt healthcare operations. The study documents realized vulnerabilities and potential harms, thus qualifying as an AI Incident. The article also discusses mitigation strategies, but its primary focus is the harm and risks demonstrated by the AI-generated deepfakes.

Can you spot a 'deepfake' x-ray?

2026-03-24
AuntMinnie
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic medical images that could be used maliciously to harm patients and healthcare operations. It fits the definition of an AI Hazard because such use could plausibly lead to an AI Incident (harm to health and disruption of critical infrastructure). No actual harm has been reported yet, so it is not an AI Incident; and because the article focuses on the risk and the need for detection tools, it is an AI Hazard rather than Complementary Information or Unrelated.