Google's Med-Gemini AI Hallucinates Nonexistent Brain Structure in Medical Paper


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Med-Gemini healthcare AI generated the hallucinated term 'basilar ganglia', a conflation of two distinct anatomical structures (the basal ganglia and the basilar artery), in a published research paper. The error, which initially went unnoticed by the authors and reviewers, highlights the risks of AI-generated medical misinformation and the potential for harm if such mistakes go undetected in clinical settings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly named (Google's Med-Gemini) used in healthcare for interpreting medical scans and generating reports. The AI system produced a hallucinated term "basilar ganglia," which is medically incorrect and could lead to misdiagnosis and inappropriate treatment, posing a direct risk of harm to patients' health. The article documents that this error was not caught initially and remains in the research paper, indicating a malfunction in the AI system's output. Medical experts express concern about the dangers of such hallucinations and the risk of automation bias leading to missed errors by clinicians relying on AI. The harm is related to injury or harm to health (definition a), and the AI system's malfunction is a contributing factor. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

Business function
Research and development

AI system task
Content generation

In other databases

Articles about this incident or hazard


Google's healthcare AI made up a body part -- what happens when doctors don't notice?

2025-08-04
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named (Google's Med-Gemini) used in healthcare for interpreting medical scans and generating reports. The AI system produced a hallucinated term "basilar ganglia," which is medically incorrect and could lead to misdiagnosis and inappropriate treatment, posing a direct risk of harm to patients' health. The article documents that this error was not caught initially and remains in the research paper, indicating a malfunction in the AI system's output. Medical experts express concern about the dangers of such hallucinations and the risk of automation bias leading to missed errors by clinicians relying on AI. The harm is related to injury or harm to health (definition a), and the AI system's malfunction is a contributing factor. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Report: Google's Medical AI Hallucinated a Nonexistent Part of the Brain

2025-08-05
InsideHook
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Med-Gemini) used in a medical context that produced a hallucinated, incorrect medical term. This is a malfunction of the AI system's output. While no direct harm has been reported, the hallucination could plausibly lead to harm if medical decisions were based on such incorrect information. Therefore, this qualifies as an AI Hazard because it plausibly could lead to injury or harm to health, but there is no evidence of realized harm yet. The event is not merely general AI news or a complementary update, as it highlights a specific AI system's erroneous output with potential health implications.

When Google's healthcare AI made up a body part

2025-08-05
Becker's Hospital Review
Why's our monitor labelling this an incident or hazard?
The AI system's use led to a factual error in medical terminology, which could plausibly lead to harm if relied upon in clinical settings. However, the error was caught and corrected before any harm was reported or occurred. Therefore, this event represents a potential risk or hazard rather than an incident with realized harm. It fits the definition of an AI Hazard because the AI's malfunction (incorrect terminology) could plausibly lead to harm in healthcare contexts if not addressed.

Google AI flags disease in brain part that doesn't exist

2025-08-05
NewsBytes
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (Med-Gemini), and it produced incorrect information (hallucination) about brain anatomy. This misidentification could plausibly lead to harm in medical decision-making or patient care if acted upon, fulfilling the criteria for an AI Hazard. Since no actual injury or harm has been reported, and the issue was corrected quietly without evidence of realized harm, this event is best classified as an AI Hazard rather than an AI Incident.

Google Med-Gemini AI Paper Invents 'Basilar Ganglia' in Major Error

2025-08-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system (Med-Gemini) is explicitly involved as it generated false medical information (hallucination). The error was published and unnoticed, indicating a failure in oversight during AI use and deployment. The misinformation could indirectly harm patients if relied upon for diagnosis or treatment, fulfilling the criteria for harm to health (a) indirectly caused by the AI system's malfunction or misuse. The event is not merely a potential risk but a realized error with implications for health and safety, thus classifying it as an AI Incident rather than a hazard or complementary information.

Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

2025-08-06
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in healthcare that produced false medical information, a hallucination, which is a malfunction of the AI system. While no direct harm to patients is reported, the article highlights the plausible risk that such errors could lead to harm in clinical practice, including misdiagnosis or inappropriate treatment. This fits the definition of an AI Incident because the AI's malfunction has directly led to a significant risk of injury or harm to health, and the error was present in official research outputs, indicating a failure in development and use. The concern expressed by medical experts about the dangers of such hallucinations in clinical settings further supports classification as an AI Incident rather than a mere hazard or complementary information.

Google's Med-Gemini AI Hallucinates Fake Body Part in Research Paper

2025-08-07
WebProNews
Why's our monitor labelling this an incident or hazard?
The Med-Gemini AI is explicitly described as an AI system analyzing medical data and generating diagnostic information. The hallucination of a fake body part in a research paper is a malfunction of the AI system's output. This error, if relied upon in clinical decision-making, could indirectly cause harm to patients' health, fulfilling the criteria for an AI Incident. The event reports a realized error with potential health consequences, not merely a hypothetical risk, and thus qualifies as an AI Incident rather than a hazard or complementary information.

Shocking discovery and ridicule over Google's AI: It invented a body part that doesn't exist!

2025-08-08
kurir.rs
Why's our monitor labelling this an incident or hazard?
The article explicitly involves a Google health AI system that produced a fabricated anatomical term, which is a malfunction of the AI system. This error could directly cause harm to patients through misdiagnosis or inappropriate treatment, fulfilling the criterion of injury or harm to health. The involvement of the AI system is clear, and the harm is direct or imminent. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Cat soap operas and babies trapped in space: "AI garbage" is taking over YouTube

2025-08-12
vijesti.ba
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating video content, which is clearly AI involvement. However, the article does not report any realized harm such as physical injury, rights violations, or significant community harm caused by these AI videos. Instead, it discusses the platform's policy responses and societal concerns about the quality and impact of AI-generated content. Therefore, this is best classified as Complementary Information, as it provides context and updates on the ecosystem and governance responses rather than describing a specific AI Incident or AI Hazard.

How can you tell real content from artificial? Fingers, eyes and shadows reveal the details

2025-08-11
RTCG - Radio Televizija Crne Gore - Nacionalni javni servis
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the generation of fake images and videos that can mislead users, which constitutes harm to communities through misinformation and deception. However, the article does not report a specific incident where harm has already occurred; rather, it discusses the ongoing presence and risks of AI-generated content and the tools and strategies to detect and mitigate such risks. Therefore, it describes a plausible risk of harm from AI-generated misinformation and disinformation, fitting the definition of an AI Hazard. It also includes complementary information about detection tools and societal responses, but the primary focus is on the potential for harm from AI-generated content.

Former Google executive warns: AI dystopia is arriving as early as 20

2025-08-11
Aktuelno
Why's our monitor labelling this an incident or hazard?
The article is a forward-looking opinion and warning from a former Google executive about the risks of AI leading to a dystopian future. It highlights potential harms such as AI-enabled fraud, surveillance, and autonomous weapons, but these are discussed as risks rather than realized incidents. There is no description of a concrete AI Incident or a specific AI Hazard event occurring now. The focus is on raising awareness and advocating for regulation, which fits the definition of Complementary Information as it provides context and governance response considerations without reporting a new incident or hazard.

Can AI replace real friends? More and more lonely teenagers are seeking comfort from AI, and here's why

2025-08-12
Ona.rs
Why's our monitor labelling this an incident or hazard?
The article primarily provides research insights and expert warnings about the potential negative effects of AI used as digital friends by teenagers. It does not describe a concrete event or circumstance where AI use has already caused harm, nor does it report a near miss or credible imminent risk event. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it offers contextual understanding and societal response considerations regarding AI's role in youth mental health and social behavior.