ChatGPT Aids in Correct Diagnosis of Rare Disease After Years of Medical Errors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Phoebe Tesoriere, a 23-year-old from Cardiff, entered her symptoms into ChatGPT after years of misdiagnosis by doctors. The AI suggested hereditary spastic paraplegia, which genetic testing later confirmed, highlighting both the potential and the limitations of AI in healthcare diagnostics.[AI generated]

Why's our monitor labelling this an incident or hazard?

The patient used the AI system (ChatGPT) to analyze her symptoms and suggest possible diagnoses. This use directly contributed to identifying a rare medical condition that healthcare professionals had missed; the ongoing harm to her health was mitigated by the AI's involvement. Although the AI did not cause the harm, its use was pivotal in resolving a significant health issue. This therefore qualifies as an AI Incident: the AI system's use directly led to a positive health outcome by correcting earlier diagnostic errors that had harmed the patient's health.[AI generated]
Industries
Healthcare, drugs, and biotechnology

Severity
AI incident

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Artificial intelligence: how a chatbot discovered a woman's rare condition after years of wrong diagnoses - BBC News Brasil

2026-04-13
BBC
How an AI chatbot discovered a woman's rare condition after years of wrong diagnoses

2026-04-13
Terra
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved as a tool the patient used to input symptoms and receive diagnostic suggestions. The AI's involvement lay in its use, not in malfunction or development. Its output directly influenced the patient's medical care by prompting further investigation and genetic testing, which confirmed the diagnosis. This addressed prior harm caused by misdiagnosis and medical neglect, so the AI system played a pivotal role in mitigating health harm. The event involves realized harm (years of misdiagnosis and health deterioration), and the AI's role was crucial in correcting it, fitting the definition of an AI Incident.
How an AI chatbot discovered a woman's rare condition after years of wrong diagnoses

2026-04-13
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved as a tool used by the patient to input symptoms and receive diagnostic suggestions. The use of the AI system led indirectly to the identification of a rare condition that had been missed by medical professionals, which had caused harm to the patient's health and livelihood. The harm includes physical injury, misdiagnosis, and impact on employment, fitting the definition of injury or harm to a person. The AI system's role was pivotal in uncovering the correct diagnosis after years of incorrect medical treatment. Although the AI did not cause the harm, its use was instrumental in remedying it, which fits the criteria for an AI Incident due to indirect contribution to harm resolution.
After years of wrong medical diagnoses, young woman discovers rare disease thanks... to ChatGPT

2026-04-10
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved as a tool used by the patient to obtain possible diagnoses. The AI's involvement indirectly led to a positive health outcome by suggesting a rare disease that was later confirmed by medical tests. There is no harm caused by the AI system; rather, it helped correct previous medical errors. The event does not describe any malfunction, misuse, or legal violation related to the AI system. The main focus is on the patient's experience and the broader context of AI use in healthcare, including expert advice on cautious use. Thus, it fits the definition of Complementary Information, providing supporting data and context about AI's impact without describing an AI Incident or AI Hazard.
Young woman discovers rare condition through ChatGPT (after years of wrong diagnoses)

2026-04-11
NOTÍCIAS DE COIMBRA
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was involved in the use phase, providing medical information that led indirectly to the correct diagnosis of a rare condition after years of incorrect diagnoses. The harm (years of misdiagnosis and progression of disease) was not caused by the AI system but preceded its use. The AI system did not malfunction or cause harm; instead, it helped identify the condition. The article also notes expert warnings about the need to confirm AI-provided information with medical professionals, indicating awareness of potential hazards but no realized harm from AI misuse in this case. Therefore, this event does not qualify as an AI Incident or AI Hazard but rather as Complementary Information about AI's role and implications in healthcare.
ChatGPT helps young woman discover rare condition after years of wrong diagnoses

2026-04-14
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used by the patient to input symptoms and obtain diagnostic suggestions. This use directly influenced the medical process, leading to a confirmed diagnosis of a rare condition after years of incorrect diagnoses. The AI system's involvement in the use phase contributed indirectly to the health outcome, a significant health-related event. Although the outcome was positive, the AI system's role was pivotal in resolving a health issue, fitting the definition of an AI Incident. The article also mentions risks, but it does not primarily describe a new hazard or complementary information; the main event is the AI's role in a diagnosis leading to a health impact.
A 23-year-old woman spent four years being told by doctors that she had anxiety and epilepsy, and was threatened with being treated as a psychiatric patient, until she fell into a three-day coma after a seizure; on leaving the hospital, she typed her symptoms into ChatGPT, which within seconds suggested a rare disease that genetic tests confirmed and that no doctor had even considered

2026-04-14
CPG Click Petróleo e Gás
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) that processed the patient's symptoms and suggested a rare-disease diagnosis later confirmed by genetic tests. This use directly influenced the patient's health outcome by correcting years of misdiagnosis and inappropriate treatment. The patient used the AI system to generate diagnostic hypotheses, which led to a confirmed diagnosis and the cessation of incorrect medication. This fits the definition of an AI Incident because the AI system's use directly led to a health-related outcome (the correction of harm and an improved diagnosis). Although the outcome was positive, the definition of an AI Incident includes events where AI use leads to injury, harm, or other significant health-related outcomes, including the correction of harm or misdiagnosis. Hence, this is an AI Incident rather than a hazard or complementary information.
Learn how an AI chatbot helped a young woman discover a rare condition | CNN Brasil

2026-04-14
CNN Brasil
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the use phase, providing diagnostic suggestions based on symptom input. This use directly contributed to the identification of a rare disease, a health-related outcome. According to the definitions, an AI Incident requires that AI use directly or indirectly lead to injury or harm to health. Here, the AI's use led to a positive health outcome, not harm, so the event does not meet the harm criteria for an AI Incident. Since the AI system's involvement is central and relates to health outcomes, but no harm or plausible harm is described, this event is best classified as Complementary Information: it provides context on AI's role in healthcare diagnosis and the patient's experience, without describing harm or plausible harm.