ChatGPT's Misleading Medical Advice Delays Cancer Diagnosis


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Warren Tierney, a 37-year-old father from Ireland, relied on ChatGPT for medical advice about his sore throat. The AI incorrectly reassured him that cancer was unlikely, leading him to delay seeking professional care. Months later, he was diagnosed with late-stage oesophageal cancer, highlighting the risks of using AI for health decisions.[AI generated]

Why's our monitor labelling this an incident or hazard?

ChatGPT is an AI system that was used to provide medical advice. Reliance on its advice delayed the diagnosis of a serious illness, which constitutes indirect harm to health. This fits the definition of an AI Incident because the AI system's use contributed to harm to a person's health through misleading advice and delayed medical intervention.[AI generated]
AI principles
Safety; Robustness & digital security; Transparency & explainability; Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


37-year-old father trusted ChatGPT on a sore throat; months later, doctors revealed a chilling, life-threatening diagnosis

2025-08-28
Economic Times
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that was used to provide medical advice. Reliance on its advice delayed the diagnosis of a serious illness, which constitutes indirect harm to health. This fits the definition of an AI Incident because the AI system's use contributed to harm to a person's health through misleading advice and delayed medical intervention.

ChatGPT over doctor: Irish man's choice ends in chronic illness diagnosis; What the official warning says - The Times of India

2025-08-29
The Times of India
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, was used for medical advice, which is explicitly warned against by OpenAI. The man's reliance on ChatGPT's reassurance delayed his seeking professional medical care, leading to a worsened health outcome. This constitutes indirect harm to the health of a person caused by the use of an AI system, fitting the definition of an AI Incident under harm category (a) injury or harm to health. The event involves the use of an AI system and realized harm, so it is classified as an AI Incident.

I asked ChatGPT if my pain was cancer - now I have five years to live

2025-08-27
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the patient to assess symptoms and received a false negative indication regarding cancer risk. This led to a delay in obtaining proper medical care, which is an indirect causal factor in the harm (late-stage cancer diagnosis with poor prognosis). The harm is to the health of a person, fulfilling the criteria for an AI Incident. The AI's role is pivotal as the patient explicitly relied on its advice, which was incorrect and contributed to the delay. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Dad asked ChatGPT for advice about his sore throat - and was floored by what followed - The Mirror

2025-08-27
Mirror
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to provide medical advice that proved inaccurate or misleading. The user relied on this advice instead of seeking professional medical help, which delayed diagnosis and treatment of a serious health condition, resulting in harm to health. The harm is indirect but clearly linked to the AI system's outputs and the user's reliance on them. The event fits the definition of an AI Incident because it involves harm to a person's health caused directly or indirectly by the use of an AI system.

"Asking ChatGPT About My Symptoms: A Sobering Diagnosis and a New Perspective on Life" - Internewscast Journal

2025-08-27
internewscast.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, ChatGPT, used for medical symptom assessment. The AI's incorrect reassurance that cancer was 'highly unlikely' led the patient to delay seeking professional medical care. This delay contributed to a late-stage cancer diagnosis, which is a serious harm to health. The AI system's role was pivotal in this chain of events, as the patient trusted its outputs and postponed medical attention. This fits the definition of an AI Incident because the AI system's use indirectly led to harm to a person's health. The event is not merely a potential risk or a complementary update but a realized harm linked to AI use.

Irish Man, 37, Relied on ChatGPT for Sore Throat Advice; Later Diagnosed With Stage-4 Cancer - The Logical Indian

2025-08-30
The Logical Indian
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, was used for medical advice and provided reassurances that cancer was unlikely, which led the user to delay seeking professional healthcare. This delay resulted in a late diagnosis of stage-four esophageal cancer, a serious harm to the individual's health. The AI system's role in providing misleading reassurance and influencing the user's decision-making directly contributed to this harm. Therefore, this qualifies as an AI Incident due to indirect harm to health caused by reliance on AI-generated medical advice.

ChatGPT convinced man his sore throat was harmless, hospital visit revealed aggressive stage-four cancer

2025-08-31
MoneyControl
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, was used by the individual to assess symptoms. Its reassurance that the sore throat was unlikely to be cancer led to a delay in seeking medical attention. This delay contributed indirectly to harm, as the cancer was diagnosed at an advanced stage, reducing treatment options and survival chances. The AI system's use and its outputs played a pivotal role in the chain of events causing harm to the person's health, fitting the definition of an AI Incident.

ChatGPT said it wasn't cancer, doctors later told Irish man it was stage four

2025-08-31
The Indian Express
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in a health context to assess symptoms and provide advice. Its reassurance that cancer was unlikely led the individual to delay seeking medical care, which contributed to a late diagnosis of stage four cancer. This delay in diagnosis and treatment constitutes harm to the person's health, directly linked to the use of the AI system. Therefore, this qualifies as an AI Incident due to indirect harm caused by reliance on the AI's outputs in a critical health situation.

ChatGPT said "No problem" and his life was put at risk: Devastated by a stage-four cancer diagnosis at the hospital | World News

2025-09-09
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used for health-related advice, and its output led to a delay in proper medical diagnosis and treatment, causing harm to the individual's health (late-stage cancer diagnosis). This fits the definition of an AI Incident because the AI system's use indirectly led to injury or harm to a person's health.

The Mistake He Made by Trusting ChatGPT Cost Him His Life! The Whole World Is Talking About Him...

2025-09-09
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system providing medical information in this case. The individual's reliance on the AI's incorrect or overly reassuring output led to delayed diagnosis of a serious illness, which is harm to health (a). The AI system's use is directly linked to the harm through its misleading advice, fulfilling the criteria for an AI Incident. Although the AI did not malfunction per se, its outputs were relied upon in a way that caused harm. Therefore, this event qualifies as an AI Incident.

In Ireland, he trusted ChatGPT instead of a doctor: It turned out he was dying

2025-09-09
En Son Haber
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used for medical symptom assessment, and its output led the user to postpone medical consultation. This delay caused harm to the individual's health (advanced esophageal cancer diagnosis with poor prognosis). The AI system's involvement in the use phase indirectly led to harm to a person, fitting the definition of an AI Incident under harm to health. The event clearly involves an AI system, the harm is realized, and the AI's role is pivotal in the chain of events leading to harm.

The advice ChatGPT gave a dying man - Sözcü Gazetesi

2025-09-09
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose advice was relied upon by a user instead of seeking professional medical diagnosis. The AI's output was inaccurate or misleading, leading to a delay in diagnosis and treatment of a life-threatening condition. This delay constitutes indirect harm to the person's health caused by the AI system's use. Therefore, this qualifies as an AI Incident under the definition of harm to health resulting from the use of an AI system.

He trusted ChatGPT instead of a doctor; it turned out he was dying

2025-09-09
Haberler
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used for medical symptom assessment and gave a reassuring but incorrect response, which led the user to delay visiting a doctor. This delay resulted in a late-stage cancer diagnosis with poor prognosis, constituting harm to the person's health. The AI's involvement in the use phase directly contributed to this harm, meeting the criteria for an AI Incident under the definition of injury or harm to a person's health caused directly or indirectly by the AI system's use.

ChatGPT said "No problem"; it turned out he was dying

2025-09-08
NTV
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI language model, was used by the individual to assess symptoms. The AI's reassurance led to the individual postponing medical consultation, which indirectly caused harm to their health due to delayed diagnosis of a serious condition. This fits the definition of an AI Incident because the AI system's use indirectly led to harm to a person's health. The event is not merely a hazard or complementary information, as the harm has already occurred and is linked to the AI system's use.

Artificial Intelligence Harmed My Health

2025-09-09
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used for health-related advice and gave an inaccurate, overly reassuring response about the severity of the symptoms. Reliance on this output contributed to harm to the individual's health, fulfilling the criteria for an AI Incident under harm category (a), injury or harm to health. The harm is indirect but clearly linked to the AI system's use and its misleading output. Therefore, this event qualifies as an AI Incident.

At 37, he trusted ChatGPT, which reassured him about his state of health... He was in fact suffering from a formidable stage 4 cancer

2025-09-10
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the patient's health assessment and provided misleading reassurance that no cancer was likely, which led to a delay in seeking appropriate medical care. This delay plausibly worsened the patient's health outcome, constituting indirect harm to the person's health. The event involves the use of an AI system, and the harm (serious health injury due to delayed diagnosis of cancer) has occurred. Therefore, this qualifies as an AI Incident under the definition of injury or harm to a person's health caused directly or indirectly by the use of an AI system.

ChatGPT assures him he is fine; his doctor diagnoses stage 4 cancer

2025-09-10
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used for medical advice and provided misleading reassurance that cancer was unlikely, which delayed the patient's decision to seek medical attention. This delay contributed indirectly to the harm of late diagnosis of a serious illness, fulfilling the criteria for an AI Incident involving harm to health. The AI system's role is pivotal as its incorrect outputs influenced the patient's actions leading to harm.

"No alarming symptoms": ChatGPT reassured him about his health; he was in fact suffering from cancer

2025-09-10
Ouest France
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the patient's self-diagnosis process and gave an inaccurate medical assessment that cancer was improbable. This misinformation contributed to the patient delaying medical consultation, which led to harm to his health (an advanced cancer diagnosis and months of lost life expectancy). The harm is linked to the AI system's use and its incorrect output. Therefore, this qualifies as an AI Incident due to indirect harm to health caused by reliance on the AI's erroneous medical advice.

ChatGPT assures him everything is fine; his doctor diagnoses cancer

2025-09-10
20minutes
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved in the patient's health assessment. Its outputs directly influenced the patient's decision to delay consulting a real doctor, which indirectly led to harm to the patient's health (a serious cancer diagnosis at an advanced stage). This fits the definition of an AI Incident, as the AI system's use indirectly led to injury or harm to a person. The event is not merely a potential hazard or complementary information, but a realized harm linked to AI use.

ChatGPT tells him everything is fine; his doctor diagnoses him with cancer

2025-09-09
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) providing medical advice that was incorrect and led the user to delay seeking professional medical care. This delay resulted in harm to the user's health, as the cancer was diagnosed at a late, advanced stage. The AI system's use directly contributed to this harm, meeting the definition of an AI Incident due to injury or harm to a person caused by the AI system's outputs and the user's reliance on them.

Because of ChatGPT, a man with stage IV cancer delayed his diagnosis

2025-09-09
RTL.fr
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that generates responses based on input. In this case, its output reassured the patient that cancer was very unlikely, which was incorrect and led to delayed diagnosis and treatment of a serious cancer. This delay constitutes indirect harm to the patient's health, fulfilling the criteria for an AI Incident under the definition of harm to a person due to the use of an AI system. The event involves the use of an AI system, the harm is realized (delay in diagnosis and treatment of cancer), and the AI system's role is pivotal in causing this harm.

"Nothing alarming," ChatGPT assured him... yet it was an advanced oesophageal cancer

2025-09-11
Doctissimo
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the patient's self-diagnosis process and gave misleading reassurance that the symptoms were not serious, which delayed the patient seeking professional medical care. This delay in diagnosis of a serious illness (stage IV esophageal cancer) directly impacted the patient's health outcome, fulfilling the criteria for an AI Incident due to indirect harm caused by the AI's erroneous outputs and the patient's reliance on them. The event involves the use of an AI system, the harm is realized, and the AI's role is pivotal in the chain of events leading to harm.

He asks ChatGPT whether his pain is serious; a few months later he is told he has stage 4 cancer - Top Santé

2025-09-12
Topsante.com
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system that generated medical advice based on the user's symptom descriptions. The AI's incorrect reassurance led the user to delay medical consultation, which indirectly caused harm to his health by postponing diagnosis and treatment of a serious condition. This fits the definition of an AI Incident because the AI system's use directly contributed to harm to a person's health. The harm is realized and significant, involving a late-stage cancer diagnosis with poor prognosis.