Study Finds ChatGPT Health AI Fails in Emergency Medical Triage


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An independent study found that ChatGPT Health, an AI medical guidance tool used by millions, failed to recommend emergency care in over half of serious cases and inconsistently flagged suicide risks. Researchers warn these triage failures pose significant health risks for users relying on the AI for urgent decisions.[AI generated]

Why's our monitor labelling this an incident or hazard?

ChatGPT Health is an AI system providing health guidance. The study shows that its use has led to undertriage of serious medical emergencies and inconsistent suicide-crisis alerts, which can cause harm to users' health by delaying or preventing necessary emergency care. Although the harm is indirect (due to the AI's failure to recommend appropriate emergency responses), the risk and actual instances of undertriage constitute injury or harm to persons. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to health.[AI generated]
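The decision rule these rationales repeatedly apply can be summarised in a short sketch. This is purely illustrative, assuming the three-way split the rationales describe (AI Incident for realized harm, AI Hazard for a credible risk of future harm, Complementary Information for context without either); the function and its inputs are our own shorthand, not the monitor's actual implementation.

def classify_event(involves_ai_system, harm_realized, harm_plausible):
    # Illustrative shorthand for the definitions cited in the rationales,
    # not the monitor's actual code.
    if not involves_ai_system:
        return "Out of scope"              # no AI system involved
    if harm_realized:
        return "AI Incident"               # the system's use led to harm
    if harm_plausible:
        return "AI Hazard"                 # credible risk of future harm
    return "Complementary Information"     # context only, no harm or credible risk

For example, the Mount Sinai findings are classified as an AI Incident because the documented undertriage is treated as realized harm, while a few of the articles below are labelled AI Hazard where only a credible risk is documented.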
AI principles
Safety, Accountability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury), Physical (death)

Severity
AI incident

AI system task
Interaction support/chatbots, Organisation/recommenders


Articles about this incident or hazard


ChatGPT Health fails to direct users to emergency care in more than half of serious cases: Study

2026-02-24
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing health guidance. The study shows that its use has led to undertriage of serious medical emergencies and inconsistent suicide-crisis alerts, which can cause harm to users' health by delaying or preventing necessary emergency care. Although the harm is indirect (due to the AI's failure to recommend appropriate emergency responses), the risk and actual instances of undertriage constitute injury or harm to persons. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to health.

First study of ChatGPT Health questions triage efficacy

2026-02-24
TechTarget
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing medical triage advice. The study reveals that its outputs can be inconsistent and potentially unsafe, especially in critical emergency scenarios, which could lead to injury or harm to users relying on it for urgent medical decisions. Although no specific harm event is reported, the findings indicate realized risks and indirect harm through misleading advice. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and potential harm to health.

ChatGPT Health fails critical emergency and suicide safety tests

2026-02-24
News-Medical.net
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system (a large language model-based tool) used by millions for health advice, including emergency triage and suicide risk assessment. The study documents that the AI system's outputs have directly led to inappropriate guidance in serious medical emergencies and suicide risk scenarios, which constitutes harm to individuals' health. The AI system's malfunction or inadequate performance in these critical areas meets the criteria for an AI Incident, as it has directly led to harm or risk of harm to persons relying on its advice. Therefore, this event is classified as an AI Incident.

Research identifies blind spots in AI medical triage

2026-02-24
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical triage and health guidance. The study found that the AI system under-triages more than half of emergency cases and inconsistently issues suicide-risk alerts, sometimes failing to warn in high-risk situations. These failures directly relate to harm to health (injury or death risk) of users relying on the system's advice. The AI system's use has thus directly or indirectly led to harm or significant risk of harm, meeting the criteria for an AI Incident. The article does not merely discuss potential future harm or general AI developments but documents concrete safety failures with real implications for health outcomes.

Mount Sinai researchers raise safety concerns about ChatGPT Health

2026-02-24
Hospital Review
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system (a large language model-based chatbot) used for medical guidance. The study shows that its use has led to under-triaging serious medical cases and inconsistent suicide-risk alerts, which could directly cause harm to users' health if they rely on its advice. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to persons. Although the article does not report actual patient harm, the under-triaging of emergencies and failure to consistently alert suicide risk represent realized harms in the form of unsafe medical guidance, which is a direct health risk. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Mount Sinai raises concerns over ChatGPT triage safety

2026-02-24
Hospital Review
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing triage advice. The study found that its use led to undertriage in emergency cases and inconsistent responses to suicide risk, which could directly or indirectly cause harm to patients by delaying or missing critical care. This constitutes an AI Incident because the AI system's use has created a direct risk of harm to health, fulfilling the criteria for injury or harm to persons due to AI system malfunction or misuse.

Study Reveals Overlooked Flaws in AI-Powered Medical Triage Systems

2026-02-24
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT Health) designed to provide medical triage and suicide risk assessment. The study documents that the AI system's outputs have failed to recommend appropriate emergency care in many urgent cases and inconsistently handle suicide risk, which could directly cause injury or harm to users. Although the article focuses on the study's findings rather than specific reported incidents, the documented failures indicate realized harm risks and unsafe AI use in critical health contexts. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm risks to individuals' health and safety.

Study: ChatGPT Health missed emergency referrals

2026-02-24
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical guidance. The study reveals that its use has led to undertriage in serious medical emergencies and inconsistent crisis alerts, which can cause harm to individuals relying on it for urgent health decisions. The harm is related to injury or harm to health (a) due to incorrect triage recommendations. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm or risk of harm in health contexts.

Mount Sinai study: ChatGPT Health failed to flag many life-threatening cases

2026-02-25
Crains New York Business
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide medical triage recommendations. The study shows that it under-triages serious emergencies, such as diabetic ketoacidosis and respiratory failure, which can be fatal if not treated promptly. This failure to correctly triage and direct users to emergency care constitutes harm to health (a), fulfilling the criteria for an AI Incident. The harm is realized as the system's outputs could lead users to delay or avoid necessary emergency care, posing a direct risk to life. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies

2026-02-26
The Guardian
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing medical advice. The study shows it under-triages over half of emergency cases and fails to detect suicidal ideation properly, which could directly lead to injury or death (harm to health). The AI's malfunction or inadequate performance in these scenarios constitutes an AI Incident because the harm is realized or highly likely, and the AI system's role is pivotal in causing this harm or risk. The article details concrete evidence of harm and risk, not just potential or hypothetical concerns, thus qualifying as an AI Incident.

ChatGPT Health Under-Assesses Emergencies, Big Study Warns Of Safety Risks

2026-02-27
NDTV
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide health guidance and triage recommendations. The study shows that its use led to under-triaging of emergency medical cases, which can directly cause harm by delaying necessary urgent care. This constitutes harm to health (a) as defined in the framework. The AI system's malfunction or limitations in performance are central to the risk of harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant potential harm to individuals' health.

Where ChatGPT Health fails -- and how it could turn deadly

2026-02-27
New York Post
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical advice. The study shows that the system failed to recommend urgent care in over half of serious cases and issued inconsistent suicide-crisis alerts, failures that can cause injury or death. This meets the definition of an AI Incident because the AI system's malfunction has directly led to harm to health (harm category a). The article reports realized harm risks, not just potential ones, and highlights the system's failure to act appropriately in emergencies, confirming the AI system's direct involvement in causing harm or the risk thereof.

Can AI be trusted in emergencies? Study raises red flags on ChatGPT health

2026-02-27
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical triage, which is a high-stakes application affecting health outcomes. The study demonstrates that the AI system under-triages urgent cases, which could plausibly lead to harm (injury or health deterioration) if users follow its recommendations. Although no actual harm is reported, the credible risk of harm due to under-assessment and inconsistent crisis messaging meets the definition of an AI Hazard. The event does not describe a realized harm (incident) but a plausible future harm based on the AI's performance limitations. Hence, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

Independent Review Raises Safety Concerns Over ChatGPT Health Feature

2026-02-27
RTTNews
Why's our monitor labelling this an incident or hazard?
The ChatGPT Health feature is an AI system designed to provide medical triage recommendations. The independent study revealed that the system frequently underestimates the severity of urgent medical conditions, advising users to wait or seek routine care instead of immediate hospital treatment. This failure in the AI's guidance can directly cause injury or harm to users' health, as users may delay critical care based on the AI's advice. The harm is realized in the form of incorrect triage recommendations that could lead to worsened health outcomes. Hence, this event qualifies as an AI Incident due to direct harm to health caused by the AI system's use and malfunction in medical urgency assessment.

ChatGPT Health Incorrectly Assesses Over Half of Medical Emergencies, Study Warns

2026-02-27
bbntimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) that provides medical recommendations. The study demonstrates that the AI system's use has directly led to incorrect medical advice that could cause injury or death, fulfilling the criteria for harm to health (a). The AI system's malfunction in assessment and triage is central to the event. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to health, as evidenced by the study's findings and expert commentary.

ChatGPT Health 'Unbelievably Dangerous'

2026-02-27
Guardian Liberty Voice
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system (a large language model-based chatbot) used for medical advice. The research shows that it fails to appropriately triage emergencies and suicide risk, which directly risks users' health and safety. The AI's outputs have led or could lead to harm by misinforming users about urgent care needs. This meets the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to health. The article documents realized harm risks, not just potential hazards, and discusses the system's malfunction and misuse in clinical contexts.

ChatGPT Health does not flag more than 50% of medical emergencies

2026-02-27
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical triage and health advice. The study shows that the AI system's outputs have directly led to incorrect medical recommendations, which constitute harm to the health of individuals (harm category a). The AI system's failure to detect suicidal ideation and its misdirection of emergency cases represent direct harms caused by its malfunction or use. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and malfunction.

'Unbelievably dangerous': ChatGPT Health may miss life-threatening emergencies

2026-02-28
The Business Standard
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide medical advice by analyzing user data. The study shows that its outputs can mislead users into not seeking urgent care when needed, which directly risks injury or death (harm to health). The AI's failure to act appropriately in emergencies and its flawed guardrails for suicide ideation demonstrate malfunction or misuse leading to harm. Therefore, this event meets the criteria for an AI Incident due to direct and significant harm to health caused by the AI system's use.

ChatGPT could miss your serious medical emergency, new study suggests

2026-03-02
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical advice. The study shows that the AI system under-triages emergencies and inconsistently flags suicide risk, which can directly lead to injury or harm to health (harm category a). The lawsuit alleging ChatGPT encouraged suicide further supports that harm has occurred. These facts meet the criteria for an AI Incident, as the AI system's use and malfunction have directly or indirectly caused harm to individuals' health and safety. The article does not merely discuss potential risks or general AI developments but documents realized harms and legal actions related to the AI's performance.

ChatGPT Health Is Staggeringly Bad at Recognizing Life-Threatening Medical Emergencies

2026-03-01
Futurism
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is explicitly an AI system providing health advice. The study shows that its use has directly led to dangerous mis-triage, where patients needing immediate hospital care were advised otherwise, posing a significant risk of injury or death. This constitutes direct harm to health (harm category a). The article also references potential legal liabilities and previous harms linked to the AI, reinforcing the seriousness of the issue. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

ChatGPT Health misses urgent medical crises over 50% of the time

2026-03-03
PCWorld
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT Health) used for medical assessment. The study shows that the AI system's use has directly led to significant risks of harm to health, as it misclassifies urgent medical conditions, potentially causing injury or death due to delayed treatment. This constitutes an AI Incident because the AI system's use has directly led to harm or significant risk of harm to people.

ChatGPT Health Tool Isn't So Great in a Crisis

2026-03-03
Newser
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical triage advice. The research shows that its outputs underestimated emergency severity, which could lead to injury or harm to patients if followed. This constitutes an AI Incident because the AI system's erroneous advice has created a direct risk of harm to health, fulfilling the criteria for harm (a).

ChatGPT Health 'under-triaged' half of medical emergencies in a new study

2026-03-03
NBC Southern California
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical triage. The study demonstrates that its use has directly led to under-triaging of emergency medical cases, which can cause injury or harm to patients by delaying necessary emergency care. This constitutes direct harm to health caused by the AI system's outputs. Although the system is not intended for diagnosis or treatment and is still in limited use, the documented under-triaging represents realized harm, not just potential harm. Therefore, this event qualifies as an AI Incident due to direct harm to health caused by the AI system's use.

Researchers warn about ChatGPT's new health service

2026-03-02
Computerworld
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing health advice. The study shows that in many cases, it advised patients incorrectly by not recommending urgent hospital care when necessary, which could lead to injury or harm to health. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm to persons' health through erroneous recommendations.

ChatGPT Health, new study shows half of emergency medical visits are 'inadequately triaged'

2026-03-03
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, ChatGPT Health, which is used to triage medical emergencies. The study demonstrates that the AI system's outputs have directly led to under-triaging of serious medical emergencies, which constitutes harm to health (a). The AI system's malfunction or limitations in providing accurate triage have caused or could cause injury or harm to patients. This fits the definition of an AI Incident, as the AI system's use has directly led to harm or risk of harm to individuals' health. The article does not merely discuss potential future harm or general AI-related information but reports on realized inadequacies with direct health implications.

Study Reveals ChatGPT Under-Triages Half of Medical Emergencies

2026-03-04
El-Balad.com
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical triage. The study shows that its outputs have led to under-triaging of emergency cases, which can cause injury or harm to patients by delaying necessary emergency care. The inconsistent handling of suicidal ideation further demonstrates potential harm to health. Since the AI system's use has directly contributed to these harms, this event meets the criteria for an AI Incident under the definition of injury or harm to health caused by AI system use.

OpenAI's ChatGPT Health chatbot struggles to identify urgent medical cases, study reveals

2026-03-05
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical triage, which is a task requiring complex AI inference. The study shows that the AI system's outputs have directly led to underestimation of emergency severity, which could cause harm to patients by delaying critical care, fulfilling the criteria for an AI Incident. The harm is realized or at least strongly evidenced by the study's findings, not merely potential. Therefore, this event qualifies as an AI Incident due to direct harm to health caused by the AI system's use.

ChatGPT Health Underestimates Medical Emergencies, Study Finds

2026-03-04
Gizmodo
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide health advice and triage. The study shows that its outputs have led to under-triage in over half of emergency cases tested, which can cause injury or harm to health by delaying necessary emergency care. Additionally, inconsistent suicide-risk alerts pose a mental health risk. These harms are directly linked to the AI system's use and its malfunction or limitations in clinical judgment. Therefore, this qualifies as an AI Incident due to realized harm to health caused by the AI system's outputs.

Is ChatGPT Health safe? Study finds AI missed half of medical emergencies

2026-03-05
Digit
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to infer from user input (symptoms) and generate medical advice. The study shows that in over half of real emergency cases, the AI advised patients to delay urgent care, which could lead to injury or death. This is a direct link between the AI system's outputs and potential harm to health. The article also notes that millions already use ChatGPT for health advice, increasing the risk of harm. Despite OpenAI's argument about intended use, the demonstrated failure in critical scenarios constitutes an AI Incident because the AI's malfunction or misuse has directly led to significant health risks.

Would ChatGPT Health Recognize Your Medical Emergency? New Study Raises Doubts

2026-03-04
eWEEK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as ChatGPT Health, an AI chatbot designed to provide medical advice. The study demonstrates that the AI system's use led to misclassification of serious medical emergencies, which could cause injury or harm to patients if they follow the AI's incorrect guidance. This meets the definition of an AI Incident, as the AI system's use directly led to harm to health. The article also includes expert warnings and company responses, but the core issue is the AI system's failure causing potential harm, not just a hazard or complementary information.

ChatGPT Health Underestimates Medical Emergencies, Study Finds

2026-03-04
wordpress-479853-1550526.cloudwaysapps.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for medical triage and crisis intervention. The study demonstrates that the AI system's outputs have directly led to under-triaging of emergencies, which can cause harm to individuals by delaying critical care, fulfilling the criteria for injury or harm to health (a). The inconsistent suicide risk alerts further indicate a failure in the AI's crisis detection function, posing risks to vulnerable users. These harms are realized or highly plausible given the reliance of millions on the system for health advice. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction in medical emergency triage and crisis detection.

ChatGPT Health delays care in over 50% of emergency-level cases, finds study

2026-03-05
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, ChatGPT Health, which provides medical guidance based on user input and medical records. The study shows that the AI system's outputs have led to incorrect triage advice in over half of emergency-level cases, which could directly cause harm to patients by delaying necessary urgent care. This is a clear example of harm to health (a) caused by the AI system's use. The AI system is implicated through its use, and the harm is realized in the simulated cases and plausibly occurs in real-world use given the system's widespread adoption. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Is ChatGPT Health Reliable? Study Finds It 'Underestimating' Health Concerns, Emergencies

2026-03-05
Tech Times
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide medical advice, thus qualifying as an AI system. The study reveals that its outputs underestimate serious medical conditions, which could plausibly lead to harm to users' health if they follow the advice to delay emergency care. Although no actual injury or harm is reported, the AI system's use presents a credible risk of harm, fitting the definition of an AI Hazard. The article focuses on the evaluation and potential risks rather than reporting a realized incident. Therefore, this event is best classified as an AI Hazard due to the plausible future harm from the AI system's unreliable triage recommendations.

ChatGPT Health missed emergency care in over half of cases: Study

2026-03-05
Techlusive
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing medical recommendations. The study shows that its use led to incorrect triage advice in emergency cases, which could directly cause injury or harm to patients by delaying critical care. The AI's failure to recommend immediate care in over half of emergency scenarios is a direct link to potential health harm. This meets the definition of an AI Incident, as the AI system's use has directly led to harm or significant risk of harm to health.

AI For Health: Boon Or Quack? ChatGPT Health Underestimates Severity In Over 50% Cases, Says Report

2026-03-05
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health, a healthcare-oriented large language model) whose use in medical triage has directly led to underestimation of emergency severity in many cases, posing a direct risk of injury or death to users. The harm is realized or highly plausible given the examples of life-threatening conditions being under-triaged. The AI system's malfunction or misuse (users relying on its advice) is central to the harm. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to health.

ChatGPT Santé: the medical assistant misses one in two life-threatening emergencies, who would have thought?

2026-03-02
CommentCaMarche
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT Santé) whose use has directly led to harm by providing incorrect medical advice that could delay urgent treatment, thereby risking injury or death. The study's findings demonstrate that the AI system's outputs are unreliable and potentially dangerous, fulfilling the criteria for an AI Incident under harm to health (a). The article also mentions ongoing lawsuits related to harm following use of the chatbot, reinforcing the realized harm. Therefore, this is classified as an AI Incident.

ChatGPT Health: when AI misses critical medical emergencies

2026-03-03
Santé Magazine
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing medical advice. The study shows that its outputs are inconsistent and sometimes dangerously misleading, failing to alert users to seek emergency care when needed, which could directly or indirectly lead to harm to individuals' health. This fits the definition of an AI Incident because the AI system's malfunction or inadequate performance has led to a risk of injury or harm to persons, and the study highlights actual problematic behavior rather than just potential risk. Therefore, this event qualifies as an AI Incident.

ChatGPT: the failures that make doctors fear the worst in life-threatening emergencies

2026-03-02
Topsante.com
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical triage, and the article details its use leading to dangerous underestimation of medical emergencies, which constitutes harm to health (a). The AI's erroneous outputs could directly cause injury or death if users follow its advice to delay emergency care. This meets the definition of an AI Incident because the AI system's use has directly led to harm or significant risk of harm in real cases simulated and tested. The article describes realized harm in the form of unsafe recommendations, not just potential future harm, so it is not merely an AI Hazard. It is not Complementary Information because the main focus is on the evaluation revealing harmful failures, not on responses or governance. Therefore, the event is classified as an AI Incident.

Medical emergencies: ChatGPT gets it wrong half the time

2026-03-03
Economie Matin
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Santé) used in medical emergency triage. The study demonstrates that the AI system's outputs have directly led to incorrect medical advice in over half of urgent cases, which can cause injury or harm to patients. This is a direct link between the AI system's use and harm to health, fulfilling the definition of an AI Incident. The presence of bias and inconsistent alerts further supports the system's malfunction or misuse leading to harm. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

" Incroyablement dangereux " : ChatGPT Santé ne reconnaît pas les urgences médicales et n'a pas recommandé de consultation à l'hôpital alors que cela était médicalement nécessaire dans plus de la moitié des cas

2026-03-02
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) that provides medical advice. The study shows that the AI system's outputs failed to correctly identify urgent medical situations in over half the cases, potentially leading users to delay or avoid necessary emergency care, which is a direct risk of injury or harm to health. Additionally, inconsistent suicide risk alerts further demonstrate malfunction with serious safety implications. These factors meet the criteria for an AI Incident because the AI system's malfunction has directly led to significant harm risks to individuals' health. The event is not merely a potential hazard or complementary information but documents realized failures with direct health consequences.

ChatGPT Provided Wrong Advice In Over 50% Medical Emergencies Tested

2026-03-08
Forbes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used for medical advice, which is a clear AI system as it generates outputs (medical recommendations) based on input scenarios. The study documents that ChatGPT's outputs were often incorrect or dangerously misleading, particularly in emergencies, which can directly lead to injury or harm to health (harm category a). The AI system's use in this context has directly led to realized harm risks, as the incorrect advice could cause patients to delay or avoid necessary emergency care, or receive unnecessary care, both of which are harmful outcomes. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, since the harm is realized and documented in the study.

The dangers of asking ChatGPT your health questions

2026-03-06
Euronews English
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system providing medical advice. The study shows that its use has directly led to harm risks by failing to advise emergency care in serious cases and inconsistently responding to self-harm intentions, which are harms to health (a). The AI system's malfunction or inadequate performance in critical scenarios is central to the harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons' health.

Should you use ChatGPT for medical advice? New study urges caution against total reliance on AI

2026-03-06
The News International
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) for medical advice, which is explicitly mentioned. The study highlights that the AI's outputs can underestimate serious medical emergencies, which could indirectly lead to harm to individuals' health if users rely solely on the AI and delay seeking urgent care. Although no specific harm is reported as having occurred, the findings reveal a credible risk of harm due to the AI system's limitations in this critical application. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm through misuse or overreliance on the AI system for medical decisions without proper professional oversight.

ChatGPT misses 'high-risk emergencies' when it is used as a doctor, study finds

2026-03-05
The Independent
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide health-related recommendations. The study shows that its outputs are insufficiently reliable in identifying urgent medical emergencies, which directly implicates the AI system's use in potentially causing harm to users who might not receive timely emergency care. This constitutes an AI Incident because the AI system's use has directly led to a significant risk of harm to health, fulfilling the criteria of harm to persons due to AI system malfunction or inadequacy in its outputs.

ChatGPT Underestimates Some Urgent Medical Cases

2026-03-06
BGNES: Breaking News, Latest News and Videos
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical advice. The study shows that its outputs can lead to underestimation of urgent medical conditions, which can cause harm to users' health if they rely on its advice instead of seeking immediate care. This constitutes an AI Incident because the AI system's use has directly led to a risk of injury or harm to persons, fulfilling the criteria of harm to health. The article reports realized harm potential and documented failures in the AI's recommendations, not just theoretical risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

"Incredibil de periculos": experții trag un semnal de alarmă după ce ChatGPT Health nu a recunoscut urgențele medicale

2026-03-09
Digi24
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for medical triage and advice. The study shows that its use has directly led to underestimation of medical emergencies in over half of the cases tested, which could realistically cause harm or death if users follow its advice. This constitutes injury or harm to health (harm category a). The AI system's malfunction in providing safe and accurate medical recommendations is central to the risk and potential harm described. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm to health.

Warning: ChatGPT does not always correctly assess the severity of medical conditions

2026-03-05
Doctorul Zilei
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system designed to provide health-related advice and triage recommendations. The study demonstrates that its outputs often underestimate or overestimate the severity of medical conditions, which can mislead users about the urgency of seeking medical care. This misguidance can directly or indirectly cause harm to users' health by delaying necessary emergency treatment or causing unnecessary medical visits. The article clearly describes realized harm risks stemming from the AI system's use, meeting the criteria for an AI Incident involving harm to health. Therefore, this event is classified as an AI Incident.

ChatGPT does not always correctly assess the severity of health conditions

2026-03-05
News.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT Health) used for health triage. The study shows that the AI system's outputs have directly led to incorrect assessments of health severity, which can cause harm to individuals by delaying necessary urgent care or causing unnecessary medical interventions. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to health or the risk thereof, and the inconsistency in recommendations further supports the presence of malfunction or misuse. Therefore, this event is classified as an AI Incident.

ChatGPT does not always correctly assess the severity of health conditions

2026-03-05
Profit.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT Health) used for medical triage, which is a high-stakes application affecting health outcomes. The study reveals that the AI system's use could plausibly lead to harm (e.g., delayed emergency care or unnecessary medical visits), indicating a credible risk of injury or harm to health. Since no actual harm or incident is reported, but a plausible risk is demonstrated, this qualifies as an AI Hazard rather than an AI Incident. The article does not describe a realized harm event but highlights potential dangers in the AI's use.

ChatGPT does not always correctly assess the severity of health conditions

2026-03-07
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
ChatGPT Health is an AI system used for health triage, which involves making predictions and recommendations about patient care urgency. The study demonstrates that the AI system underestimates severity in over half the cases and overestimates in many others, which can lead to harm by either delaying necessary urgent care or causing unnecessary medical consultations. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to health, fulfilling the criteria of injury or harm to a person or group of people due to AI system use.

ChatGPT gets it wrong half the time on medical emergencies

2026-03-11
20minutes
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system providing medical advice. The study demonstrates that its outputs are frequently incorrect or insufficiently cautious in urgent medical situations, which can directly cause harm to users relying on its guidance. The article describes realized errors and misjudgments, not just potential risks, indicating actual harm or at least a significant risk of harm. Hence, this qualifies as an AI Incident due to direct or indirect harm to health caused by the AI system's use.

AI reportedly gets it wrong half the time in emergencies, according to a new study

2026-03-13
Sudinfo.be
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (ChatGPT Santé) used for medical advice. The study demonstrates that the AI system's recommendations are frequently incorrect or inappropriate in emergency medical situations, which can directly lead to harm or injury if users follow the advice. The harm is related to health risks from misdiagnosis or delayed treatment, fulfilling the criteria for harm to persons. The article describes realized harm potential through the AI's erroneous outputs, not just theoretical risk, thus constituting an AI Incident rather than a hazard or complementary information.

ChatGPT is wrong more than 50% of the time when asked about medical emergencies

2026-03-10
Slate.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs in medical emergency scenarios have been empirically shown to be frequently incorrect and potentially harmful. The AI's erroneous recommendations could directly lead to injury or harm to health, fulfilling the criteria for an AI Incident. The article describes realized harm risks from the AI's use, not just potential future harm, and thus it qualifies as an AI Incident rather than a hazard or complementary information.

In a medical emergency, ChatGPT gets it wrong half the time

2026-03-11
24heures
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model) used to provide medical advice. The study shows that its use leads to incorrect recommendations in nearly half of emergency cases, which can directly cause harm to users' health by delaying necessary urgent care. The AI's confident but inaccurate responses create a risk of injury or harm to persons relying on it. Since the harm is realized (incorrect advice given) and linked directly to the AI system's outputs, this meets the criteria for an AI Incident involving injury or harm to health.

Health: why ChatGPT is no substitute for medical advice

2026-03-13
Linfo.re
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a medical context, where its outputs have been assessed and found to be imperfect and potentially harmful if relied upon for medical decisions. However, the article reports on the study's findings and the caution advised by experts rather than describing a specific incident where harm occurred due to ChatGPT's advice. There is no direct or indirect harm reported as having happened, only a recognition of potential risks. Therefore, this qualifies as Complementary Information, providing context and updates on AI system limitations and expert recommendations, rather than an AI Incident or Hazard.

How much should we trust ChatGPT on medical emergencies?

2026-03-14
TF1 INFO
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used here for medical advice. The study shows that its use can lead to underestimation of serious medical conditions, which could indirectly cause harm to patients if they follow its advice instead of seeking immediate medical attention. This constitutes an AI Incident because the AI system's use has directly or indirectly led to potential harm to health, fulfilling the criteria for harm (a) under the AI Incident definition. The article describes realized inaccuracies and risks, not just potential future harm, so it is not merely a hazard or complementary information.