AI Chatbots Spread Dangerous Medical Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Recent studies published in The Lancet Digital Health and Nature Medicine reveal that AI chatbots, including popular large language models, frequently provide inaccurate and potentially harmful medical advice with high confidence. Experts warn these systems can mislead users and pose significant health risks because they cannot reliably distinguish false medical information from accurate information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (LLMs and AI chatbots) whose use has directly led to harm by disseminating false or misleading medical information that could endanger public health. The research shows that these AI systems fail to reliably distinguish false medical claims from true ones, sometimes agreeing with misinformation presented in a clinical style, which increases the risk of harm. Since the harm is realized (dangerous advice is being given to, and relied on by, millions of users), this meets the criteria for an AI Incident involving injury or harm to health and harm to communities. The article does not merely warn of potential harm but documents actual problematic outputs and their implications.[AI generated]
AI principles
Safety; Robustness & digital security

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

AI system task
Interaction support/chatbots


Articles about this incident or hazard

Researchers warn: AI chatbots can give disastrous medical advice

2026-03-12
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs and AI chatbots) whose use has directly led to harm by disseminating false or misleading medical information that could endanger public health. The research shows that these AI systems fail to reliably distinguish false medical claims from true ones, sometimes agreeing with misinformation presented in a clinical style, which increases the risk of harm. Since the harm is realized (dangerous advice is being given to, and relied on by, millions of users), this meets the criteria for an AI Incident involving injury or harm to health and harm to communities. The article does not merely warn of potential harm but documents actual problematic outputs and their implications.

Doctors sound the alarm: the bizarre advice AI sometimes gives patients - from garlic in the anus to cold milk for bleeding

2026-03-13
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots powered by large language models) that are used by millions for medical advice. The AI systems produce false and potentially harmful health recommendations, which can directly harm users' health if acted upon. The harm is realized or highly plausible given the examples of dangerous advice. This fits the definition of an AI Incident as the AI's use has directly led to harm to people's health. The article does not merely warn about potential harm (hazard) nor is it a general update or unrelated news; it documents actual harmful outputs from AI systems.

What Artificial Intelligence recommends to people asking for medical advice: insert garlic into the anus and drink cold milk

2026-03-12
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI chatbots providing harmful medical advice that could injure users' health, fulfilling the criteria for harm to persons (a). The AI systems are clearly involved in the use phase, and the harm is direct or indirect through the dissemination of false medical information. The presence of AI systems is explicit (chatbots with large language models). The harm is realized, not just potential, as patients have been given dangerous recommendations. Hence, this is an AI Incident rather than a hazard or complementary information.

Why you shouldn't take medical advice from artificial intelligence. The misguided claims of a "doctor" chatbot

2026-03-12
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots based on AI) used for medical advice. The studies reveal that these AI systems have provided incorrect or misleading medical information, which can directly or indirectly cause harm to individuals' health by influencing their medical decisions improperly. This fits the definition of an AI Incident because the AI's use has led to or could lead to injury or harm to health. The article describes realized harms and risks, not just potential future hazards, and does not focus on governance or research updates alone, so it is not Complementary Information. Therefore, the classification is AI Incident.

AI models can give disastrous medical advice (studies)

2026-03-12
AGERPRES
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs/chatbots) that generate medical advice. The studies demonstrate that these AI systems often fail to distinguish false medical claims from true ones and confidently provide misleading advice, which can directly harm users' health. The harm is realized because millions of people use these chatbots for medical questions, and the misinformation can lead to dangerous health decisions. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people's health. The article does not merely warn about potential harm but documents actual problematic outputs and their implications, confirming realized harm rather than just plausible future harm.

Doctors sound the alarm: Beware of medical advice given by chatbots!

2026-03-12
Doctorul Zilei
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models/chatbots) used for medical advice. While no specific harm or injury is reported as having occurred, the article clearly outlines the plausible risk that these AI systems could lead to harm by disseminating medical misinformation that users might accept and act upon. This fits the definition of an AI Hazard, as the development and use of these AI chatbots could plausibly lead to harm to health (harm category a). The article is a warning based on research findings rather than a report of an actual incident or realized harm, so it is not an AI Incident. It is also not merely complementary information or unrelated news, as the focus is on the potential for harm from AI use in medical advice.

The risks facing people who ask artificial intelligence chatbots for medical advice

2026-03-12
antenasatelor.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLM chatbots) used for medical advice. The research shows these AI systems can generate false or misleading medical information, which could plausibly lead to harm to users' health if they rely on such advice. While no direct harm is documented in the article, the credible risk of future harm from reliance on inaccurate AI medical advice qualifies this as an AI Hazard under the framework. The article does not describe a realized harm or incident but warns of plausible future harm due to AI system use and malfunction (inaccurate outputs).

AI models can give disastrous medical advice: from cold milk to rectal garlic

2026-03-12
spotmedia.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs/chatbots) used for medical advice, which have been shown to provide false or misleading health information with high confidence. This misinformation can directly harm individuals relying on such advice, fulfilling the criterion of injury or harm to health. The article documents realized harm through the dissemination of dangerous medical misinformation, not just potential harm. Therefore, this qualifies as an AI Incident under the framework, as the AI's use has directly led to harm to people.

Study: AI chatbots can give wrong medical advice with the same confidence as correct advice and are no more useful than an Internet search

2026-03-13
Edupedu.ro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLMs/chatbots) used for medical advice. The study shows that these AI systems have directly led to the dissemination of incorrect medical information delivered with high confidence, which can harm users' health decisions. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to the health of people (harm category a). The article describes realized harm and documented evidence of AI systems agreeing with medical misinformation, not just a hypothetical risk, so it is not merely a hazard. It is not complementary information because the article focuses on the harm caused by the AI's outputs rather than on responses or governance. Therefore, the classification is AI Incident.

What AI recommends to people asking for medical advice: insert garlic into the anus and drink cold milk

2026-03-13
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots using large language models) providing medical advice that is factually incorrect and potentially harmful, which has already occurred and is ongoing as millions use these systems daily. The AI's use has directly led to health-related misinformation, posing a real risk of injury or harm to people. Therefore, this qualifies as an AI Incident under the definition of harm to health caused by the use of AI systems.

From cold milk for esophageal bleeding to rectal garlic for immunity: AI chatbots are spreading false medical information

2026-03-16
Digi24
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (chatbots powered by large language models) whose use has directly led to the dissemination of false medical information that can cause harm to health (harm category a). The article cites studies demonstrating that these AI systems fail to reliably detect falsehoods in medical advice and can confidently provide dangerous recommendations. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to people due to misleading medical advice. The harm is realized in the form of misinformation that can lead to injury or health harm if acted upon by users. The article does not merely warn about potential future harm but documents ongoing issues with AI chatbots' medical advice reliability, thus qualifying as an AI Incident rather than a hazard or complementary information.

Study confirms what docs say: ChatGPT unreliable for health advice

2026-03-12
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots using large language models) used for health advice. The study shows that these AI systems' outputs are unreliable and can mislead users, which constitutes indirect harm to health (harm category a). The harm arises from the AI system's use and its failure to provide accurate, reliable medical advice, which can lead to injury or harm to persons relying on it. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health.

AI Chatbots Miss More Than Half of Medical Diagnoses, Study Finds

2026-03-11
CNET
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) used for medical diagnosis and advice, which directly relates to health outcomes. The study shows that the AI systems frequently provide incorrect or incomplete diagnoses and follow-up steps, and users often rely on these outputs, potentially leading to harm. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm to health (a). The harm is realized or highly plausible given the widespread use and the documented inaccuracies. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI advised someone to stick garlic where the sun don't shine

2026-03-13
Metro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) explicitly mentioned as providing medical advice. The advice includes harmful misinformation that could cause injury, fulfilling the criterion of harm to health (a). The AI systems' use has directly led to this harm by confidently presenting false medical recommendations. The study documents actual instances of such advice being given, indicating realized harm rather than just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

'Rectal garlic insertion for immune support': Medical chatbots confidently give disastrously misguided advice, experts say

2026-03-11
livescience.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs/chatbots) that generate medical advice. Their use has directly led to the dissemination of false and potentially harmful medical information, such as recommending rectal garlic insertion for immune support, which could cause injury or harm to health. The article documents realized harm in the form of dangerous advice being given and the risk of users acting on it, thus meeting the definition of an AI Incident. The harm is linked to the AI system's use and its failure to correctly evaluate medical claims, leading to misinformation that could injure people.

Insert garlic, drink milk and avoid exercise: AI chatbots endorse dubious medical claims

2026-03-13
Perth Now
Why's our monitor labelling this an incident or hazard?
The AI chatbots are AI systems that generate medical advice based on input prompts. The study shows that these AI systems have endorsed false medical claims that could cause injury or harm to people's health if acted upon, fulfilling the criteria for an AI Incident under harm category (a). The harm is realized in the sense that the AI systems are actively providing misleading health information, which can lead to health risks. Therefore, this event qualifies as an AI Incident.

AI telling people to stick garlic up their bums for better health

2026-03-13
accrington
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models/chatbots) providing medical advice that is false and potentially harmful, which constitutes direct harm to health (a). The misinformation is presented confidently, increasing the risk that users might follow dangerous recommendations, such as rectal garlic insertion. The study documents actual instances of such advice being given, indicating realized harm rather than just potential. Hence, this qualifies as an AI Incident due to the direct link between AI system outputs and health-related harm through misinformation.

Health chatbots can be useful, but they should not substitute a doctor

2026-03-13
Standard Digital News - Kenya
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (health chatbots like ChatGPT Health) and discusses its use and performance in simulated medical scenarios. The research identifies instances where the AI system's advice could lead to harm, such as underestimating the urgency of medical conditions or inconsistent crisis support, which could indirectly cause injury or harm to individuals relying on the chatbot. Although the harm is not reported as having occurred in real patients, the study's findings demonstrate that the AI system's use has directly led to potentially unsafe advice, constituting an AI Incident due to the realized risk of harm from the AI's outputs in health contexts.