ChatGPT Gives Inaccurate and Potentially Dangerous Medication Advice, Study Finds

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by Long Island University pharmacists found that nearly three-quarters of ChatGPT's responses to medication questions were incomplete or incorrect, sometimes missing dangerous drug interactions or generating false references. These inaccuracies pose a risk of harm to patients relying on the AI for medical information.[AI generated]

Why's our monitor labelling this an incident or hazard?

ChatGPT is an AI system generating medical information. The study shows that its use has led to the dissemination of incorrect and potentially dangerous medical advice, which constitutes harm to health (a). The AI's malfunction or limitations in providing accurate medical information have directly contributed to this risk. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's outputs in a critical domain like healthcare.[AI generated]
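
The rule these rationales apply can be read as a simple triage: if an AI system is involved and harm has materialized, the event is an AI incident; if harm is credible but has not yet occurred, it is an AI hazard; otherwise the report is complementary information. Below is a minimal sketch of that decision logic in Python. The function and parameter names are hypothetical illustrations of the rule as described in these rationales, not the monitor's actual implementation.

    def classify_event(involves_ai_system: bool,
                       harm_realized: bool,
                       harm_plausible: bool) -> str:
        # Hypothetical sketch of the triage rule described in the rationales;
        # not the OECD AI Incidents Monitor's actual code.
        if not involves_ai_system:
            return "Out of scope"           # no AI system involved in the event
        if harm_realized:
            return "AI incident"            # harm has materialized (e.g. harm to health)
        if harm_plausible:
            return "AI hazard"              # credible risk of harm, not yet realized
        return "Complementary information"  # context on AI capabilities and limits

    # The medication study as the monitor reads it: harm to health realized
    # through inaccurate outputs, hence "AI incident".
    print(classify_event(involves_ai_system=True, harm_realized=True, harm_plausible=True))
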
AI principles
Accountability; Human wellbeing; Robustness & digital security; Safety; Transparency & explainability

Industries
Healthcare, drugs, and biotechnology; Consumer services

Affected stakeholders
Consumers

Harm types
Physical (injury); Physical (death)

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard

ChatGPT makes medical recommendations, and following them is a deadly risk

2023-12-06
infobae
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical information. The study shows that its use has led to the dissemination of incorrect and potentially dangerous medical advice, which constitutes harm to health (a). The AI's malfunction or limitations in providing accurate medical information have directly contributed to this risk. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's outputs in a critical domain like healthcare.

What you shouldn't ask or request of ChatGPT

2023-12-07
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
The article highlights potential harms related to ChatGPT's use, such as privacy breaches and misinformation, but these are presented as risks or findings from studies rather than a concrete incident causing harm. There is no direct or indirect harm described as having occurred due to the AI system's malfunction or misuse in a specific event. The content mainly serves to inform and caution users and stakeholders about AI limitations and risks, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Pharmacists say ChatGPT answers questions about drugs incorrectly

2023-12-06
Todo Noticias
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and its use in medical information provision is assessed. The study finds that ChatGPT frequently gives incorrect or incomplete answers about drugs, which could plausibly lead to harm if users rely on it for medical decisions. However, the article does not report any realized harm or incidents caused by ChatGPT's responses. Therefore, this event fits the definition of an AI Hazard, as it identifies credible risks of harm from the AI system's use but does not document actual harm occurring.

Be careful about trusting ChatGPT on which medications to take

2023-12-05
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs have directly led to misinformation about medication interactions and health advice, posing a risk of harm to patients' health. The study documents actual inaccuracies and false information generated by the AI, which constitutes an AI Incident under the definition of harm to health caused directly or indirectly by the AI system's use. The article also references warnings from health professionals and the WHO about the risks of relying on such AI tools for medical information, reinforcing the assessment that the potential for harm has been realized.

ChatGPT makes medical recommendations, and following them is a deadly risk

2023-12-06
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) providing medical advice, which is a use of AI. The study shows that the AI's incorrect or incomplete responses could directly or indirectly lead to harm to people's health if followed, fulfilling the criteria for an AI Incident. The harm is related to injury or harm to health due to misinformation about medication interactions and effects. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the potential for harm is realized through the AI's unreliable outputs.

ChatGPT: beware of following the AI's medical recommendations!

2023-12-06
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate responses to user queries. The study shows that its medical advice is often inaccurate or incomplete, which can indirectly lead to harm to users' health if they follow such advice. The fabrication of sources further undermines trust and can contribute to misinformation. Although no specific injury is reported, the potential for harm to health is clear and realized through the dissemination of unreliable medical information. Therefore, this qualifies as an AI Incident due to indirect harm to health caused by the AI system's outputs.

ChatGPT makes medical recommendations, and following them is a deadly risk

2023-12-06
Noticias de Bariloche
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical advice that has been empirically shown to contain significant inaccuracies and fabricated references. The article highlights that users following such advice could suffer injury or harm to their health, fulfilling the criteria for an AI Incident. The harm is direct and realized because the misinformation is present and could lead to fatal outcomes if acted upon. Therefore, this event qualifies as an AI Incident due to the AI system's use leading to health risks and potential injury.

Be careful: ChatGPT provides inaccurate answers to questions about medications

2023-12-07
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs (answers to medication questions) have directly led to potential harm to patients' health due to inaccuracies and omissions, such as failing to identify dangerous drug interactions. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm or risk of harm to persons, fulfilling the criteria for injury or harm to health (a).

No, ChatGPT does not provide good information about medications

2023-12-09
Doctissimo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs on medication information were found to be largely inaccurate, posing potential harm to patient health if used without proper human verification. Although no specific harm incident is reported as having occurred, the study highlights a significant risk of harm due to misinformation about drug interactions and effects. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm to health if relied upon improperly. The article emphasizes the need for human control and verification to mitigate this risk.

ChatGPT provides inaccurate answers to questions about medications, and in some cases inaccurate answers that could put patients at risk, according to a study

2023-12-06
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) used to answer medication questions. The AI's outputs were found to be inaccurate or incomplete in a majority of cases, including a specific example where it failed to identify a harmful drug interaction. This misinformation could directly lead to injury or harm to patients, fulfilling the criteria for harm to health (a). The AI system's use is the direct cause of the misinformation, and thus the event qualifies as an AI Incident rather than a hazard or complementary information. The study's findings demonstrate that the potential for harm has been realized, not just that future harm is plausible, and the event is not merely a governance or research update but a report of an actual AI system failure with health risks.

Medication: ChatGPT provides inaccurate answers to questions about treatments

2023-12-06
Pourquoi docteur
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) providing medical information that is often incorrect or incomplete, which could plausibly lead to harm to patients if acted upon. Although no direct harm is reported, the demonstrated inaccuracies represent a credible risk of injury or harm to health, fitting the definition of an AI Hazard. The event is not merely general AI news or a complementary update but highlights a plausible risk from AI use in healthcare information.

2023-12-05
News 24
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model chatbot) whose use in answering medical questions is directly linked to potential harm to patients if inaccurate or incomplete information is provided. The study documents that many responses were unsatisfactory, implying a real risk of harm if users rely on these answers without verification. Although no specific incident of harm is reported, the potential for harm to health is clearly plausible and credible given the context. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to harm to health, but no actual harm is documented in the article.

2023-12-05
News 24
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system (a large language model chatbot) used to generate responses to user queries. The study found that many responses related to medications were inaccurate or incomplete, which could directly or indirectly lead to harm to patients' health if they rely on this information for medical decisions. Although no specific harm event is reported, the potential for harm is clearly articulated and plausible given the context. Therefore, this situation qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no actual harm incident is described in the article.

2023-12-07
developpez.net
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical information responses. The study shows that its outputs are often inaccurate or incomplete, including false references, which could mislead patients or healthcare professionals. The example of a missed drug interaction that could cause harmful side effects demonstrates a direct link to potential patient harm. Since the AI system's use has already resulted in misinformation that could endanger health, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the AI system's use.

Whatever you do, don't ask the free version of ChatGPT about medications!

2023-12-07
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT free version) whose outputs have directly led to misinformation about medication interactions, posing potential harm to users' health. The misinformation about the interaction between Paxlovid and verapamil could cause dangerous health outcomes if acted upon. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health (a).

ChatGPT limited in producing environmental justice information on rural counties, finds US study

2023-12-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The article discusses the use of an AI system (ChatGPT, a large language model) and its limitations in providing equitable information across geographic locations. However, it does not report any direct or indirect harm resulting from the AI's use, nor does it describe an event where harm occurred or was narrowly avoided. The focus is on research findings about bias and potential future implications, which aligns with providing complementary information about AI system capabilities and limitations rather than an incident or hazard. Therefore, this is best classified as Complementary Information.

ChatGPT limited in producing environmental justice information on rural counties, finds US study

2023-12-16
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in generating information. However, the article does not describe any realized harm or injury resulting from the AI's outputs, nor does it report any direct or indirect harm caused by the AI system. Instead, it discusses potential biases and limitations that could affect the reliability of information, which is a form of complementary knowledge about AI capabilities and challenges. Therefore, this is best classified as Complementary Information, as it provides insights into AI system limitations and potential areas for improvement without reporting an AI Incident or AI Hazard.

Stop Using ChatGPT as a Substitute for Web Search

2023-12-15
The How-To Geek
Why's our monitor labelling this an incident or hazard?
The article focuses on general commentary about AI chatbots' reliability and the importance of verifying information rather than reporting a concrete AI Incident or AI Hazard. There is no description of a specific harm caused by an AI system, nor a credible risk of harm from a particular event. The content serves as complementary information to understand AI chatbot limitations and user caution, fitting the definition of Complementary Information rather than Incident or Hazard.

ChatGPT achieves the pinnacle of human intelligence, laziness, developers are baffled

2023-12-12
TechSpot
Why's our monitor labelling this an incident or hazard?
The article discusses a peculiar behavior of an AI system refusing to perform tasks, which is unusual but does not directly or indirectly lead to any of the defined harms (injury, rights violations, disruption, property or community harm). The developers are aware and working on a fix, indicating ongoing improvement. There is no evidence or plausible scenario presented that this behavior could lead to significant harm. Therefore, this is best classified as Complementary Information, as it provides context and updates about AI system behavior and developer responses without constituting an incident or hazard.

Geographic biases exist in ChatGPT, reveal researchers

2023-12-16
Telangana Today
Why's our monitor labelling this an incident or hazard?
The article discusses the discovery of geographic biases in ChatGPT's outputs, which is an AI system, based on research testing its performance. However, it does not describe any realized harm such as injury, rights violations, or community harm resulting from these biases. Instead, it highlights potential limitations and the need for further study and mitigation. Therefore, this event is best classified as Complementary Information, as it provides important contextual and research insights about AI system limitations without reporting an AI Incident or AI Hazard.

ChatGPT Limited in Producing Environmental Justice Information on Rural Counties, Finds US Study

2023-12-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT, a large language model) and its use in generating information. However, the article does not describe any direct or indirect harm resulting from the AI's outputs, nor does it report an incident where harm occurred. The findings reveal limitations and biases that could plausibly lead to harm if unaddressed, but the article frames this as a research finding and a call for further study and mitigation, not as an incident or immediate hazard. Therefore, this is best classified as Complementary Information, providing context and understanding about AI system limitations and potential future risks without reporting an actual incident or hazard.

Strange theory claims OpenAI's ChatGPT is 'seasonally depressed'

2023-12-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article centers on user perceptions and speculative theories about ChatGPT's behavior, with no indication that the AI system's development, use, or malfunction has led to any harm or violation of rights. OpenAI's statement clarifies that there was no intentional update and no malfunction causing harm. The discussion is about possible explanations for observed behavior, which is uncertain and not linked to any incident or hazard. Therefore, this is Complementary Information providing context and updates on AI system behavior without reporting an AI Incident or AI Hazard.

Geographic biases exist in ChatGPT, reveal researchers

2023-12-16
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article discusses the discovery of geographic biases in ChatGPT, a generative AI system, which is an important insight into AI limitations. However, there is no indication that these biases have directly or indirectly caused harm to individuals, communities, or rights, nor that they have plausibly led to harm. The research is exploratory and aimed at improving AI fairness and accuracy. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides supporting data and context about AI system performance and potential areas for improvement.

Researchers use environmental justice questions to reveal geographic biases in ChatGPT

2023-12-16
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in generating information. The researchers found geographic biases in the AI's outputs, which could lead to informational disparities. However, the article does not report any direct or indirect harm occurring due to these biases, nor does it describe an incident where harm has materialized. The focus is on identifying limitations and potential biases, which is a form of complementary information that supports understanding and improving AI systems. Therefore, this is best classified as Complementary Information rather than an AI Incident or AI Hazard.

ChatGPT found by study to spread inaccuracies when answering medication questions

2023-12-14
1010 WCSI
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system involved in generating medication-related responses. The study found that many responses were inaccurate or incomplete, including false drug interaction information, which could lead to patient harm if acted upon. This constitutes indirect harm to health due to misinformation from the AI system's outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health (harm category a).

AI Experiencing Mental Health Issues? Theory Claims That ChatGPT Could Be Experiencing Winter Blues As the Bot Refuses Some Commands

2023-12-15
Science Times
Why's our monitor labelling this an incident or hazard?
The article centers on anecdotal observations and speculative theories about ChatGPT's behavior changes, without evidence of harm or incidents resulting from these behaviors. The AI system's unpredictable responses are noted, but no direct or indirect harm to people, infrastructure, rights, property, or communities is reported. The discussion is primarily about the AI's performance characteristics and user perceptions, with no materialized incident or credible hazard of harm. Hence, this is best classified as Complementary Information providing context and updates on AI system behavior and user reactions, rather than an AI Incident or AI Hazard.

ChatGPT Reveals Geographic Biases in Environmental Justice Information

2023-12-16
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article describes a study analyzing ChatGPT's performance and revealing biases in its information provision, but it does not report any direct or indirect harm caused by the AI system. There is no indication of injury, rights violations, or other harms occurring due to the AI's outputs. Instead, the focus is on understanding limitations and advocating for improvements, which aligns with providing complementary information about AI capabilities and challenges rather than an incident or hazard.

ChatGPT Shows Geographic Bias, Environmental Justice Study Finds

2023-12-15
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in generating information about environmental justice. The study finds geographic bias, which is a limitation and a potential source of misinformation or unequal access to information. However, the article does not document any actual harm or incident caused by this bias, only the identification of a limitation and the potential for future harm if unaddressed. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The article primarily provides complementary information about AI system performance and research findings, which fits the definition of Complementary Information.

Researchers use environmental justice questions to reveal geographic biases

2023-12-15
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its development and use in providing information. However, the article does not describe any direct or indirect harm resulting from the AI system's outputs. Instead, it discusses potential biases and limitations that could inform future improvements. There is no indication of injury, rights violations, disruption, or other harms occurring or imminent. Therefore, this is best classified as Complementary Information, as it provides supporting data and context about AI system limitations and potential biases without reporting an AI Incident or AI Hazard.

Geographic biases exist in ChatGPT, reveal researchers

2023-12-16
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article discusses the use of an AI system (ChatGPT) and its limitations in providing accurate, location-specific information, which is a form of bias in AI outputs. However, there is no indication that these biases have directly or indirectly caused harm such as injury, rights violations, or disruption. The event is about identifying potential issues and improving AI fairness, which aligns with providing complementary information about AI system performance and development rather than reporting an incident or hazard involving harm or plausible harm.

ChatGPT found by study to spread inaccuracies when answering medication questions. Nearly 75% of drug-related responses from ChatGPT were incomplete or incorrect

2023-12-14
Bollyinside - Breaking & latest News worldwide
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system generating medical information. The inaccuracies and fabricated citations directly relate to the AI's outputs and could cause harm to users' health if acted upon. This constitutes an AI Incident due to the direct link between the AI's use and potential injury or harm to health.

ChatGPT Shows Poor Performance in Answering Drug-Related Questions

2023-12-12
Drugs.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its performance in providing medical information. However, the article does not report any actual harm or injury resulting from ChatGPT's responses, only the potential risk if users rely on inaccurate information. Since no direct or indirect harm has been reported, and the focus is on the evaluation and cautionary advice, this constitutes Complementary Information rather than an AI Incident or Hazard.

ChatGPT struggles to answer medical questions, new research finds

2023-12-10
CNN
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate responses to medical questions. The study found that its inaccurate answers and fabricated citations could directly cause harm to patients if relied upon, such as dangerous drug interactions and incorrect dosing leading to withdrawal symptoms or other health issues. This constitutes injury or harm to health (harm category a) caused by the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has led to a realized risk of harm, not just potential harm.

ChatGPT asked to answer medical questions. Check results

2023-12-10
Economic Times
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and its use in answering medical questions, showing that it often provides incomplete or inaccurate answers. However, there is no report of actual harm resulting from these inaccuracies, nor is there a direct or indirect link to injury, rights violations, or other harms. The research findings and the EU regulatory developments are informative and contextual, focusing on understanding AI limitations and governance rather than describing an incident or hazard. Therefore, the event is Complementary Information rather than an AI Incident or AI Hazard.

ChatGPT found by study to spread inaccuracies when answering medication questions

2023-12-14
Fox News
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system explicitly mentioned as providing medication-related answers. The study found that a significant portion of its responses were inaccurate or incomplete, including false drug interaction information that could cause harm to patients. This misinformation directly relates to potential injury or harm to health, fulfilling the criteria for an AI Incident. The article documents realized harm in the form of inaccurate outputs that could endanger patients, not merely a potential risk. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

ChatGPT is injurious to health: Why you should not take medical advice from OpenAI's chatbot

2023-12-11
Firstpost
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used for generating responses to user queries, including medical questions. The study shows that its inaccurate or incomplete answers about medications can directly cause harm to users' health if they act on this misinformation. The event involves the use of an AI system leading to realized or highly plausible harm to health, fitting the definition of an AI Incident. The article also notes OpenAI's warnings against using ChatGPT for medical advice, but the harm risk is present due to actual inaccurate outputs and user reliance.

Is artificial intelligence good for medical advice?

2023-12-11
Deseret News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) used for medical advice, where its use has directly led to harmful or potentially harmful outcomes, such as dangerous drug interaction misinformation and dosage errors. These constitute injury or harm to health, fulfilling the criteria for an AI Incident. The harm is realized or highly plausible given the dangerous nature of the misinformation. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT shows poor performance in answering drug-related questions

2023-12-12
Medical Xpress - Medical and Health News
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system providing natural language responses. The study shows that its use in medication-related queries leads to a significant proportion of inaccurate or incomplete answers, which could indirectly cause harm to patients or healthcare decisions if relied upon without verification. This constitutes an AI Incident because the AI system's use has directly or indirectly led to potential harm to health by providing misleading or incorrect drug information.

Can ChatGPT Answer Your Medical Questions? A Recent Study Found This...

2023-12-11
Thehealthsite.com
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system based on a large language model. The study shows that its use in answering medical questions has directly led to the dissemination of inaccurate and potentially harmful medical information, which can cause injury or harm to health (harm category a). The AI system's outputs were misleading and sometimes dangerous, indicating a malfunction or misuse in this context. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Chat GPT not accurate at providing medication info, research says

2023-12-11
WFTS
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate responses to medication questions. The study shows that its use has directly led to the dissemination of inaccurate medical information, which poses a risk of harm to patients' health if acted upon. This constitutes an AI Incident because the AI system's outputs have directly or indirectly led to potential injury or harm to health. The article reports realized inaccuracies and potential harm, not just a hypothetical risk, so it is not merely a hazard or complementary information.

ChatGPT struggles to accurately answer medical questions, study says

2023-12-11
https://www.wsaw.com
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used to answer medical questions, and its inaccurate or incomplete responses, including fabricated citations, pose a risk of harm to users' health if relied upon. Although no direct harm is reported, the potential for harm through misuse or overreliance on the AI's outputs is credible and plausible. Therefore, this qualifies as an AI Hazard due to the plausible risk of harm to health from the AI system's use in medical advice contexts.

ChatGPT provides inaccurate and incomplete information about drugs

2023-12-13
Knowridge Science Report
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is involved, and its use in answering drug-related questions is evaluated. The study reveals that many responses are inaccurate or incomplete, which could plausibly lead to harm if users act on incorrect medical advice. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible risk of harm from the AI system's use in healthcare information provision.

Chat GPT not accurate at providing medication info, research says

2023-12-11
Scripps News
Why's our monitor labelling this an incident or hazard?
ChatGPT is an AI system used to generate responses to medication questions. The study found that its use led to incorrect or incomplete information that could cause harm if acted upon, such as recommending unsafe medication combinations. This constitutes harm to health (a) indirectly caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to potential harm to persons due to inaccurate medical information.