ChatGPT Provides Inaccurate and Fabricated Breast Cancer Screening Advice, Raising Health Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A University of Maryland study found ChatGPT answered 88% of breast cancer screening questions correctly, but sometimes gave inaccurate or fabricated information, including citing fake journal articles. Doctors warn that relying on the AI chatbot for medical advice poses health risks due to its tendency to hallucinate and provide inconsistent responses.[AI generated]

Why's our monitor labelling this an incident or hazard?

ChatGPT is an AI system generating medical advice. The study found that it gave incorrect and fabricated information about cancer screening, which can directly harm users' health if acted upon. The harm is realized because the misinformation has already been produced and could mislead users. The AI system's malfunction (hallucination) and use have directly led to this harm. Hence, this is an AI Incident involving injury or harm to health caused by AI-generated misinformation.[AI generated]
AI principles
Safety, Robustness & digital security, Transparency & explainability, Accountability, Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury), Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard

ChatGPT makes up fake data about cancer, doctors warn

2023-04-04
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical advice. The study found that it gave incorrect and fabricated information about cancer screening, which can directly harm users' health if acted upon. The harm is realized because the misinformation has already been produced and could mislead users. The AI system's malfunction (hallucination) and use have directly led to this harm. Hence, this is an AI Incident involving injury or harm to health caused by AI-generated misinformation.

ChatGPT gives wrong advice for cancer patients and experts say 'you're better off with Google'

2023-04-04
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) used to provide medical advice. The study found that ChatGPT gave wrong and sometimes fabricated advice about breast cancer screening, which could directly harm patients' health if followed. The presence of racial bias further indicates harm related to discrimination. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to harm to health and potential rights violations. The article does not describe mere potential harm but actual observed inaccuracies and risks, so it is not an AI Hazard or Complementary Information.

ChatGPT gives wrong advice for cancer patients

2023-04-04
The Telegraph
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model) used to answer medical questions. The article reports that it gave wrong and fabricated advice about breast cancer screening, including racial bias, which can directly harm individuals' health by misleading them about important medical decisions. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to health (a). The presence of fabricated sources and inconsistent answers further supports the classification as an incident rather than a hazard or complementary information.

Can ChatGPT Be Used For Breast Cancer Screening Advice? Study Finds 88% Responses To Be Correct, Some Inconsistent

2023-04-05
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical advice responses. The study found that while most responses were correct, some were inaccurate or inconsistent, which could indirectly lead to harm if users rely on incorrect health information. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm to health through misinformation. The article reports realized issues with the AI's outputs, not just potential risks, and discusses specific inaccuracies and inconsistencies that could affect patient decisions.

When GPT hallucinates: Doctors warn against using AI as it makes up information about cancer

2023-04-05
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Bing AI) whose use has directly led to the dissemination of false or misleading medical information about cancer. This misinformation poses a risk of harm to individuals' health by potentially leading to incorrect medical decisions or delayed treatment. The AI's hallucination of fake journals and inconsistent answers constitutes a malfunction or misuse of the AI system's outputs, directly causing harm through misinformation. Therefore, this qualifies as an AI Incident due to harm to health caused by the AI systems' outputs.

ChatGPT helpful for breast cancer screening advice with certain caveats, new study finds

2023-04-04
Medical Xpress - Medical and Health News
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) used for health advice, which is an AI system by definition. The study assesses its use and notes inaccuracies and inconsistencies that could plausibly lead to harm if users rely solely on its advice without consulting medical professionals. However, no actual harm or incident is reported. Therefore, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (misleading health advice), but no direct or indirect harm has yet occurred according to the article.

ChatGPT provides correct health advice about 88% of the time, study finds

2023-04-05
News-Medical.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) providing health advice. While the AI system's outputs are sometimes inaccurate or outdated, leading to a risk of harm if users rely solely on it, the article does not report any realized harm or incidents resulting from this. Therefore, the situation represents a plausible risk of harm from the AI system's use, fitting the definition of an AI Hazard rather than an AI Incident. The article also discusses ongoing research and efforts to improve the system, but the main focus is on the potential for harm due to inaccuracies in AI-generated health advice.

ChatGPT helpful for breast cancer screening advice with certain caveats, new study finds

2023-04-04
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) providing health advice, which is a use of AI. The study identifies that while the AI generally provides accurate information, it occasionally produces inaccurate or outdated advice, which could potentially lead to harm if users rely solely on it without consulting doctors. However, the article does not report any actual harm occurring from the AI's use; rather, it highlights the potential risks and the necessity of human oversight. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., incorrect health decisions based on inaccurate AI advice), but no direct harm has been reported yet.

Can ChatGPT aid in breast cancer screening advice?

2023-04-05
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a healthcare context, specifically for breast cancer screening advice. The study identifies instances where the AI system provided inaccurate or outdated information, which could potentially lead to harm if users rely solely on it without consulting doctors. However, the article does not report any actual harm occurring from ChatGPT's use, only the potential for misinformation and ethical concerns. Therefore, this situation represents a plausible risk of harm due to AI use, fitting the definition of an AI Hazard rather than an AI Incident. The article also provides broader context on AI's role and risks in healthcare, but the main focus is on the potential for harm from AI advice inaccuracies.

Doctors Warn Against Using ChatGPT for Medical Advice; AI Chatbot Unreliable, Makes up Health Data

2023-04-05
Science Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has directly led to the dissemination of inaccurate and fabricated medical information. This misinformation can cause harm to individuals relying on it for health decisions, fulfilling the criteria for injury or harm to health (a). The AI's hallucination behavior and inconsistent responses demonstrate malfunction or misuse in the context of medical advice. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's outputs leading to potential health risks.

Doctors Have Warned That You Shouldn't Trust Medical Advice Given By ChatGPT - Especially About Cancer - Wonderful Engineering

2023-04-05
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate medical advice. The study shows that its outputs include false and outdated information, as well as fabricated references, which can misinform users about serious health conditions like cancer. This misinformation can directly or indirectly cause harm to people's health if they rely on it instead of consulting medical professionals. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use in generating medical advice.

University of Maryland Study Reveals That ChatGPT Proves Beneficial in Breast Cancer Screening Advice with Certain Limitations - Thailand Medical News

2023-04-05
Home - Thailand Medical News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in providing medical advice. The study identifies inaccuracies and inconsistencies in the AI's responses, which could plausibly lead to harm if users rely solely on it without professional consultation. However, the article does not report any realized harm or injury caused by ChatGPT's advice. Therefore, this situation represents a potential risk rather than an actual incident. The main focus is on assessing the AI's performance and discussing the implications for safe use, which aligns with the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT helpful for breast cancer screening advice with certain caveats, new study finds

2023-04-04
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) providing health advice. The study assesses the AI's performance and notes inaccuracies and inconsistencies that could plausibly lead to harm if users rely on it without consulting medical professionals. Since no actual harm has been reported, but there is a credible risk of future harm from incorrect or outdated medical advice, this qualifies as an AI Hazard. The article primarily discusses the evaluation and potential risks rather than an incident of harm occurring, so it is not an AI Incident. It is more than just complementary information because it highlights plausible future harm from AI use in health advice.

ChatGPT makes up fake data about cancer, doctors warn

2023-04-05
The Frontier Post
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate responses to user queries. The study shows that its outputs included false and misleading medical information, which constitutes harm to health (a). The AI's hallucination behavior and provision of inaccurate advice about cancer screening can directly or indirectly lead to injury or harm to individuals relying on this information. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use in a health context.

ChatGPT Answers 88% of Breast Cancer Screening Questions Correctly, But Misses Big with Others

2023-04-05
Inside Precision Medicine
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT, a large language model) is involved in generating medical advice. The study identifies inaccuracies and inconsistencies in its responses, which could plausibly lead to harm if users rely on them for health decisions. Since no actual harm is reported, but potential harm is credible, this constitutes an AI Hazard rather than an AI Incident. The article focuses on evaluation and potential risks rather than a realized incident or harm.