AI Chatbots Found to Dispense Inaccurate and Potentially Harmful Medical Advice

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple studies led by US and Canadian researchers found that popular AI chatbots, including ChatGPT, Gemini, Grok, and others, frequently provide inaccurate or incomplete medical information. Around half of their responses to health-related queries were problematic, raising concerns about potential harm to users who rely on these AI systems for medical advice.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (chatbots powered by large language models) whose use has directly led to problematic medical advice that could cause injury or harm to users' health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as the chatbots are widely used by adults for health queries, and the study documents a high rate of problematic responses that could mislead users. The event is not merely a potential risk (hazard) or a response/update (complementary information), but a clear case where AI use has caused or is causing harm, justifying classification as an AI Incident.[AI generated]
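
The monitor's decision rule, as applied throughout this entry, reduces to a simple triage: realized or ongoing harm from an AI system is an AI Incident, credible but unrealized harm is an AI Hazard, and follow-ups or responses are Complementary Information. The sketch below expresses that rule in Python for illustration only; the class and field names are hypothetical and do not reflect the monitor's actual implementation.

from dataclasses import dataclass

# Hypothetical event record; these fields are illustrative, not the AIM schema.
@dataclass
class Event:
    involves_ai_system: bool      # an AI system is explicitly involved
    harm_realized: bool           # harm has occurred or is ongoing
    harm_plausible: bool          # harm is credible but not yet documented
    is_response_or_update: bool   # follow-up to a previously logged event

def classify(event: Event) -> str:
    """Triage an event per the monitor's stated criteria (sketch only)."""
    if not event.involves_ai_system:
        return "Unrelated"
    if event.is_response_or_update:
        return "Complementary information"
    if event.harm_realized:
        return "AI incident"
    if event.harm_plausible:
        return "AI hazard"
    return "Unrelated"

# Chatbots currently dispensing problematic medical advice at scale:
# realized, ongoing harm, hence the "AI incident" label on this entry.
print(classify(Event(True, True, True, False)))  # -> AI incident
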
AI principles
Safety
Robustness & digital security

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

Experts sound alarm after AI found to put public health at risk

2026-04-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots powered by large language models) whose use has directly led to problematic medical advice that could cause injury or harm to users' health, fulfilling the criteria for an AI Incident. The harm is realized or ongoing, as the chatbots are widely used by adults for health queries, and the study documents a high rate of problematic responses that could mislead users. The event is not merely a potential risk (hazard) or a response/update (complementary information), but a clear case where AI use has caused or is causing harm, justifying classification as an AI Incident.

AI chatbots often 'hallucinate' and give inaccurate medical...

2026-04-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots such as ChatGPT and Grok) that generate medical information. The study documents that these AI systems have directly led to the dissemination of inaccurate and misleading medical advice, which can harm public health and communities. This aligns with harm category (d) - harm to communities or public health. The AI systems' outputs are flawed due to hallucinations and biased or incomplete training data, indicating malfunction or misuse in their use phase. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and documented.

We are AI experts. Here are the dangers of using chatbots for medical information

2026-04-14
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use in providing medical information has been shown to produce a high rate of inaccurate or misleading answers. This can directly or indirectly lead to harm to individuals' health if users act on incorrect information. The article highlights the systemic issue of hallucination and misinformation by AI chatbots in a critical domain (medicine), which constitutes a significant, clearly articulated harm. Therefore, this qualifies as an AI Incident due to realized harm potential and documented problematic outputs affecting health-related information.

AI Chatbots Give Misleading Medical Advice 50% of the Time, Study Finds

2026-04-14
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) providing medical advice that is often inaccurate or misleading, which directly risks injury or harm to users' health. The harm is realized or ongoing, as users rely on these chatbots for health guidance. The study highlights the AI systems' outputs as authoritative but flawed, indicating a direct link between AI use and health risks. Therefore, this qualifies as an AI Incident due to harm to health caused by the AI systems' use.

AI chatbots give misleading medical advice 50% of the time, study finds

2026-04-15
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) providing medical advice, which is a direct use of AI. The study finds that about 50% of the advice is problematic, including highly problematic responses, which can lead to injury or harm to users' health. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (or at least significant risk of harm) to people. The harm is realized in the form of misleading medical advice that could cause health issues. Therefore, this event is classified as an AI Incident.

AI chatbots give misleading medical advice 50% of the time, study finds

2026-04-15
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) providing medical advice. The study finds that about 50% of the advice is problematic, including highly problematic responses, which can mislead users and cause harm to their health. This is a direct harm to persons' health caused by the use of AI systems, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents realized problematic outputs and associated risks, thus it is not an AI Hazard or Complementary Information. It is not unrelated because the AI systems and their harmful outputs are central to the event.

A good portion of the medical information provided by chatbots is inaccurate and incomplete

2026-04-15
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical information that is often inaccurate or incomplete, which can mislead users and potentially cause harm if followed. The study highlights that 50% of responses were problematic, with some potentially leading to harmful treatments or outcomes. This constitutes harm to health (a), directly linked to the use of AI systems. Hence, it meets the criteria for an AI Incident as the AI systems' outputs have directly led to realized or plausible harm.

AI chatbots give misleading medical advice 50% of the time, study finds

2026-04-15
The Japan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) providing medical advice, which is a use of AI. The study finds that about 50% of the advice is problematic, including highly problematic responses, implying a direct link between the AI system's outputs and potential harm to users' health. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to health (a).

Urgent health warning issued for ChatGPT users over 'inaccurate' answers

2026-04-14
Lancashire Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Grok, Meta AI) providing medical information. The study documents that these AI systems produce a high proportion of inaccurate or fabricated responses, which can mislead users and potentially cause harm to their health. This constitutes harm to persons and communities due to misinformation. The AI systems' outputs are directly linked to this harm, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but reports realized problematic outputs and their implications, thus qualifying as an AI Incident rather than a hazard or complementary information.

Much of the medical information provided by chatbots is inaccurate and incomplete, study warns

2026-04-15
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use in providing medical information has resulted in a high rate of problematic responses that could cause harm to users' health if acted upon. The study highlights that 50% of responses were problematic, with some potentially leading to harmful outcomes, which fits the definition of an AI Incident due to indirect harm to health caused by the AI systems' outputs. The article does not merely warn of potential harm but documents realized inaccuracies and risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

AI chatbots often 'hallucinate' and give inaccurate medical information - study

2026-04-14
The Irish News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to the dissemination of inaccurate medical information, which constitutes harm to health (a). The AI systems' outputs are misleading and potentially dangerous if relied upon for medical advice, fulfilling the criteria for an AI Incident. The study documents realized harm through problematic responses and fabricated citations, not just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

AI chatbots give misleading medical advice 50% of the time, study finds

2026-04-15
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The chatbots are AI systems providing medical advice, and the study shows that about half of their responses are problematic, including highly problematic ones. This directly relates to harm to health (a), as misleading medical advice can cause injury or harm to individuals relying on it. Therefore, this qualifies as an AI Incident due to the realized harm from the AI systems' outputs.

Be careful when asking chatbots about health: half of their answers contain errors or problems

2026-04-14
Metro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing health information, which is explicitly stated. The study shows that half of the responses are problematic, some very problematic, potentially leading to harm if users act on them without professional advice. This constitutes indirect harm to health caused by the AI systems' outputs. The harm is realized in the sense that misinformation is present and could lead to injury or harm to health. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use has directly or indirectly led to harm to health.

Research reveals failures in Meta, ChatGPT and Grok on health topics

2026-04-14
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) providing medical information that is often inaccurate or incomplete, with a significant portion of responses being problematic enough to potentially cause harm if followed. The harm relates to injury or harm to health (definition a). The AI systems' outputs have directly led to misinformation that could misguide users, constituting realized harm or at least harm that is occurring through dissemination. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is materialized and linked to the AI systems' use.

Substantial amount of medical information provided by popular chatbots inaccurate and incomplete

2026-04-14
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly identified as generative AI chatbots providing medical information. The study documents that these chatbots produce inaccurate and incomplete answers, some of which could plausibly direct users to ineffective or harmful treatments if followed without professional guidance. This constitutes indirect harm to health through misinformation dissemination. The researchers warn about the risks of continued use without oversight, indicating that harm is already occurring or highly likely. Therefore, this qualifies as an AI Incident due to realized harm (misinformation with potential health consequences) caused by the AI systems' outputs.

AI chatbots often 'hallucinate' and give inaccurate medical information - study

2026-04-14
Basingstoke Gazette
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) that generate medical information, which is a clear AI system involvement. The study documents that these systems produce inaccurate and sometimes fabricated information, which could plausibly lead to harm to health (harm category a) if users rely on this misinformation. Since the harm is not reported as having occurred but the risk is credible and significant, this fits the definition of an AI Hazard. The article does not describe a realized incident but warns of potential harm, making AI Hazard the appropriate classification.

Chatbots Often Give Inaccurate, Incomplete Medical Info

2026-04-14
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use has directly led to the dissemination of inaccurate and potentially harmful medical information to users. This misinformation can plausibly cause harm to individuals' health if they act on it without professional advice, fulfilling the criteria for harm to persons. Therefore, this qualifies as an AI Incident because the AI systems' outputs have directly contributed to a significant harm (misinformation leading to potential health risks).

50% of AI chatbots' medical answers are imprecise or dangerous

2026-04-15
Agencia Sinc
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative AI chatbots) whose use has directly led to harm or risk of harm to health by providing inaccurate or dangerous medical advice. The study documents realized harms (imprecise and dangerous responses) and the potential for direct health damage if users follow such advice. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to health and communities through misinformation and potential physical harm. The article does not merely warn of potential harm but reports on actual problematic outputs and their consequences.

Concerns people could get the wrong answers from AI - to medical and health-related questions

2026-04-14
NZCity
Why's our monitor labelling this an incident or hazard?
AI chatbots providing health-related information that is often problematic can indirectly harm individuals' health if they act on incorrect advice. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to potential harm to people's health through misinformation. The article reports realized problematic outputs, not just potential risks, indicating an incident rather than a hazard or complementary information.

Popular Chatbots Provide Significant Amounts of Inaccurate and Incomplete

2026-04-15
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves multiple generative AI chatbots (AI systems) providing medical advice that is often inaccurate or misleading, as demonstrated by the study's findings. This misinformation can cause harm to individuals' health and to communities by spreading false or incomplete medical knowledge. The harm is realized and documented, not merely potential. The AI systems' use in public-facing medical communication and their confident but flawed responses directly contribute to the harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Experts Warn Against Relying on AI Chatbots Like ChatGPT, Gemini, and Grok for Medical Advice Due to Inaccuracy Concerns - Internewscast Journal

2026-04-14
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI chatbots (AI systems) providing medical advice that is often inaccurate or misleading, with about 50% of responses being problematic. This misinformation could plausibly lead to injury or harm to users' health if followed, which constitutes harm under the AI Incident definition. The involvement of AI in generating these responses is direct, and the harm is either occurring or highly likely given the nature of medical advice misuse. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

A good portion of the medical information provided by chatbots is inaccurate and incomplete - Diario de Santiago. Noticias de Santiago de Compostela y Galicia.

2026-04-14
Diario de Santiago. Noticias de Santiago de Compostela y Galicia.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly identified as generative chatbots used to provide medical information. The study documents that these AI systems have produced a significant amount of inaccurate and incomplete medical advice, which can mislead users and cause harm to their health if followed. This meets the definition of an AI Incident because the AI systems' use has directly led to harm to health (harm category a). The article does not merely warn of potential harm but reports on actual problematic outputs and risks of misinformation amplification, confirming realized harm rather than just plausible future harm. Hence, the classification as AI Incident is appropriate.

Popular AI Chatbots Can Provide Misleading Medical Information

2026-04-14
Inside Precision Medicine
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose outputs have been empirically shown to be problematic in the medical domain, potentially causing harm to users' health through misleading advice. The harm is indirect but plausible and significant, as users might make harmful medical choices based on the AI outputs. Since the harm is realized in the sense that the chatbots are currently providing misleading information that could lead to harm, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by AI system use.

Medical journal sounds warning about reliability of health information from AI chatbots

2026-04-15
Business Day
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI chatbots) used by the public for health information. The study demonstrates that these AI systems produce inaccurate or incomplete outputs that could plausibly cause harm if users rely on them for medical decisions without consulting professionals. This constitutes indirect harm to health (a), fulfilling the criteria for an AI Incident. The harm is realized in the sense that the misinformation is currently being provided and could lead to injury or harm if acted upon. The event is not merely a warning or potential risk but documents actual problematic outputs from AI systems in use, thus not an AI Hazard or Complementary Information. It is not unrelated because the AI systems and their outputs are central to the harm discussed.

Medical information presented by chatbots inaccurate, incomplete: Study

2026-04-15
News18
Why's our monitor labelling this an incident or hazard?
The event involves generative AI chatbots (AI systems) providing medical information that is inaccurate and incomplete, which can mislead users and potentially cause harm to their health if acted upon. The study explicitly highlights the risk of harm due to misinformation and problematic responses, fulfilling the criteria for an AI Incident as the AI systems' outputs have directly or indirectly led to potential harm to health. Therefore, this event qualifies as an AI Incident.

A study found that 50% of the medical answers from the 5 most-used AIs are imprecise or dangerous

2026-04-15
La Nacion
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use in providing medical advice has directly led to significant risks of harm to health and safety, as evidenced by the study's findings of inaccurate and potentially dangerous responses. The AI systems' outputs have caused or could cause injury or harm to individuals relying on them for health information, fulfilling the criteria for an AI Incident. The harm is realized in the form of misinformation that can lead to harmful health decisions, thus it is not merely a potential hazard but an actual incident.

Study: medical information from chatbots fails more often than it seems

2026-04-15
Yahoo!
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative medical chatbots) whose use has been studied and found to produce a high rate of problematic, potentially harmful medical information. While no specific harm event is reported, the study warns that continued deployment without supervision could plausibly lead to harm to users' health. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to injury or harm to persons. The article is not merely general AI news or a product announcement, but a research-based warning about potential harm from AI use in healthcare information dissemination. Therefore, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

Study Finds AI Chatbots Provide Inaccurate Health Advice

2026-04-15
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) used for health advice, which is a high-stakes domain affecting human health. The study finds that these AI chatbots provide inaccurate, incomplete, and misleading medical information, with nearly half of responses being problematic and potentially harmful if followed. This constitutes indirect harm to health, as users relying on such advice could be misled into ineffective or harmful treatments. The AI systems' outputs are the direct source of misinformation, fulfilling the criteria for an AI Incident involving harm to health. The event is not merely a potential risk or a governance update but documents realized problematic outputs with plausible harm, thus classifying it as an AI Incident.

According to a study, ChatGPT, Gemini and other AI bots give wrong medical advice half the time

2026-04-15
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini, and similar bots) providing medical advice that is often incorrect or misleading, which is a direct consequence of their use. This misinformation can plausibly lead to harm to health, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The study's findings confirm that these AI systems have already produced problematic outputs that could mislead users, constituting realized harm rather than just potential risk. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Your AI doctor could be wrong half the time

2026-04-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical advice, which is a use of AI. The misleading advice can directly or indirectly cause harm to individuals' health if acted upon, fulfilling the criteria for harm to health. Since the harm is occurring through the use of AI systems giving problematic advice, this qualifies as an AI Incident.

A large share of the medical information provided by artificial intelligence is inaccurate and incomplete

2026-04-15
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) providing medical information that is often inaccurate or incomplete. The study explicitly warns that continued deployment without supervision or public education risks amplifying misinformation, which could lead to harmful health outcomes. While no actual harm is reported, the plausible risk of harm from misleading medical advice is clear and credible. Hence, this qualifies as an AI Hazard rather than an AI Incident. The article is not merely general AI news or a complementary update but a focused report on a credible risk arising from AI system use in health information dissemination.

AI chatbots give worse medical advice than expected

2026-04-15
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) providing medical advice that is often inaccurate or incomplete, with a substantial portion of responses being problematic enough to potentially cause harm if acted upon. The study explicitly states that such advice could lead to users following ineffective or harmful treatments without professional guidance, which constitutes harm to health. The AI systems' outputs are directly linked to this risk, fulfilling the criteria for an AI Incident. The article does not merely warn about potential future harm but documents realized problematic outputs that have already been produced and could cause harm, thus surpassing the threshold for an AI Hazard. It is not merely complementary information or unrelated news, as the focus is on the harm caused by the AI systems' outputs in a critical domain (health).

Chatbots like ChatGPT, Gemini and Grok fail in half of their medical answers, scientific study reveals

2026-04-15
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots based on generative AI) providing medical advice that is often inaccurate or incomplete, with half of the responses being problematic. The study highlights that such misinformation could lead to users following harmful or ineffective treatments, which constitutes harm to health (a). The AI systems' outputs have directly contributed to this risk, fulfilling the criteria for an AI Incident. The article does not merely warn about potential future harm but documents realized issues with AI-generated medical information, thus it is not an AI Hazard or Complementary Information. It is not unrelated because the AI systems are central to the event and the harm discussed.

50% of AI chatbots' medical answers are imprecise or dangerous

2026-04-15
LaSexta
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) whose use has directly led to the dissemination of inaccurate and potentially dangerous medical information, which can cause harm to users' health. The harm is realized or at least highly probable given the nature of the misinformation and its potential consequences. The study documents these harms and the AI systems' role in causing them, fulfilling the criteria for an AI Incident. The event is not merely a potential risk (hazard) or a complementary update; it reports on actual problematic outputs and their implications for public health.

AI chatbots provide inaccurate medical information in 50% of cases

2026-04-15
Diario de Sevilla
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use has directly led to the dissemination of inaccurate and potentially harmful medical information, which can cause injury or harm to users' health if acted upon. The study documents that 50% of responses are problematic or dangerous, indicating realized harm rather than just potential risk. The AI systems' outputs are the direct source of misinformation, fulfilling the criteria for an AI Incident under harm to health. The article does not merely warn about potential future harm but reports on existing problematic outputs, confirming the incident classification.

AI chatbots frequently give inaccurate or incomplete health information: Study

2026-04-15
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI chatbots, which are AI systems generating content in response to health-related queries. The study documents that these AI systems provide inaccurate and incomplete medical information, which could plausibly lead to harm if users rely on it without professional oversight. The harm relates to injury or harm to health, fitting the definition of an AI Incident. The event reports realized harm potential through problematic responses, not just theoretical risk, and thus is not merely a hazard or complementary information. Hence, the classification as AI Incident is appropriate.

Do you use AI for medical advice, health research? Here's why you should be careful

2026-04-15
The Telegraph
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly involved as the source of inaccurate and misleading medical advice, which can plausibly cause harm to users' health if they follow such advice without professional consultation. The study documents that nearly half of the chatbot responses were problematic, indicating a significant risk of harm. The article also notes that many people are increasingly relying on these AI systems for health information, increasing the likelihood of harm. This meets the criteria for an AI Incident because the AI systems' use has directly or indirectly led to harm to health through misinformation and misleading advice.

ChatGPT, Gemini, and other AI bots give bad medical tips half the time

2026-04-15
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT, Gemini, etc.) providing medical advice, which is an AI system use case. The study reveals that these AI systems have produced inaccurate or misleading health information, which can directly or indirectly cause harm to individuals relying on this advice for medical decisions (harm to health). Therefore, the AI systems' use has led to realized harm in the form of misinformation that could negatively impact health outcomes. This fits the definition of an AI Incident because the AI systems' outputs have directly or indirectly led to harm to people's health. The article does not merely warn about potential harm but documents actual problematic outputs from AI systems in use.

Artificial intelligence chatbots deliver misleading medical advice 50% of the time, according to a study | Diario Financiero

2026-04-15
Diario Financiero
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots powered by AI) whose use has directly led to harm to health by providing misleading or incorrect medical advice. This constitutes harm to persons' health (a), fulfilling the criteria for an AI Incident. The study documents realized harm through problematic advice, not just potential risk, and the AI systems' outputs are central to the harm described. Therefore, this event qualifies as an AI Incident.

Popular chatbots often provide inaccurate and incomplete medical information

2026-04-15
News-Medical.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use in providing medical information is shown to produce a high rate of problematic responses that could mislead users and cause harm if followed without professional guidance. This aligns with the definition of an AI Hazard, as the AI systems' outputs could plausibly lead to injury or harm to health. There is no direct evidence in the article that harm has already occurred, but the risk is clearly articulated and credible. Therefore, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Popular AI chatbots often give problematic health advice: Study

2026-04-15
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The study explicitly tested AI chatbots (AI systems) and found that nearly half of their health-related responses were problematic, including highly problematic misinformation. Since these chatbots are widely used and their misleading advice can directly harm individuals' health decisions and public health, the event meets the criteria for an AI Incident. The harm is realized in the form of misinformation that can injure or harm health (harm category a) and harm communities through misinformation spread (harm category d).

AI chatbots give risky medical advice in half of cases, study finds

2026-04-15
Businessday NG
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) providing medical advice. The study documents that about 50% of the advice is problematic, with nearly 20% highly problematic, which can mislead users and cause harm to their health. This meets the definition of an AI Incident because the AI systems' use has directly or indirectly led to harm to health (a). The harm is realized and significant given the scale of usage and the nature of medical advice. The article does not merely warn of potential harm but reports on actual problematic outputs and their risks, thus it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's outputs are central to the harm described.

Be careful asking ChatGPT or Gemini for medical advice: a study reveals their answers are problematic - La Opinión

2026-04-16
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT and Gemini) providing medical advice, which is an AI system use case. The study documents that half of the responses are problematic, including dangerous advice that could lead to injury or harm to users' health, fulfilling the harm criterion (a). The AI systems' outputs have directly or indirectly led to health risks, making this an AI Incident rather than a mere hazard or complementary information. The article emphasizes realized harm and risks from the AI systems' use, not just potential future harm or general commentary.

Why using AI chatbots for medical advice could seriously damage your health

2026-04-15
ECR
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) used for medical advice, which is a use of AI. The problematic responses from these AI systems can directly or indirectly lead to harm to the health of users who rely on this advice, fulfilling the criteria for an AI Incident. The harm is realized or ongoing as the chatbots are actively providing problematic advice to millions of users, not just a potential risk. Therefore, this event qualifies as an AI Incident due to the direct link between AI system use and harm to health.

AI and health: chatbots provide inaccurate medical information

2026-04-15
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical information, where a significant portion of responses are inaccurate or misleading, potentially causing harm to users' health if acted upon. This constitutes indirect harm to health due to the AI systems' outputs. Therefore, this qualifies as an AI Incident because the AI systems' use has directly or indirectly led to harm to persons by disseminating potentially harmful medical advice.

Chatbots' Medical Info Often Inaccurate, Incomplete

2026-04-15
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI chatbots) whose use has directly led to the dissemination of inaccurate and potentially harmful medical information. This constitutes harm to health (a) because users relying on such misinformation could suffer injury or adverse health outcomes. Although the harm is not described as a specific incident causing injury, the study's findings indicate that the AI systems' outputs are already problematic and pose a real risk of harm. Therefore, this qualifies as an AI Incident due to the realized dissemination of harmful misinformation by AI systems in a public-facing context.

AI Chatbot Responses: Unmasking the Risks in Health Information | Health

2026-04-15
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) providing medical information, which is a clear AI system use case. The study documents that nearly half of the chatbot responses are problematic, indicating a direct link between the AI system's outputs and misinformation that can harm public health. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (misinformation with health consequences).

AI chatbots provide poor answers to medical questions half the time, study finds

2026-04-15
CIDRAP
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical information, which is an AI system by definition. The study documents that these AI systems have produced inaccurate and incomplete answers, some potentially harmful, thus directly leading to harm to health (harm category a). The harm is realized as users may rely on these poor answers for medical decisions. Therefore, this qualifies as an AI Incident due to the direct link between AI system outputs and harm to health through misinformation.

Risks of AI chatbots in medical advice - Notiulti

2026-04-15
notiulti.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical information. The study shows that these AI systems' outputs are often problematic and could mislead users, potentially causing harm to health. Although no specific harm is reported as having occurred, the findings highlight a credible risk that the AI chatbots' use could lead to health-related harm. Therefore, this qualifies as an AI Hazard because the AI systems' use could plausibly lead to harm, but no actual incident of harm is documented in the article.

Popular AI chatbots frequently gave problematic health advice: Study

2026-04-15
anews
Why's our monitor labelling this an incident or hazard?
The study explicitly tested AI chatbots (AI systems) and found that nearly half of their health-related responses were problematic, including highly problematic misinformation. This misinformation can directly harm users' health by misleading them about medical issues. The AI systems' use in providing health advice is the direct cause of this harm. Hence, the event meets the criteria for an AI Incident due to realized harm from AI system outputs.

AI chatbots give wrong medical advice 50% of the time, study warns

2026-04-15
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly identified as the source of misleading medical advice, which can directly harm users' health by providing inaccurate or incomplete information. The study's findings demonstrate that these AI systems' use has already led to problematic outputs, fulfilling the criteria for an AI Incident involving harm to health and communities. The event is not merely a potential risk but documents realized problematic advice, thus qualifying as an incident rather than a hazard or complementary information.

Study reveals AI chatbots give misleading medical advice 50% of the time

2026-04-15
News9live
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) providing medical advice that is misleading or unreliable, which can cause harm to health. The study documents that these chatbots often give confident but incorrect answers, which can mislead users and potentially cause injury or harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm or risk of harm to health. The harm is realized or at least clearly occurring given the reliance on these chatbots for medical advice without clinical judgment.