AI Chatbots Found to Give Inaccurate and Risky Health Advice


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A large-scale study led by Oxford University found that AI chatbots, including those based on large language models, frequently provide incorrect or inconsistent medical advice and sometimes fail to recognize urgent health needs. The research warns that relying on these chatbots for health guidance can be dangerous and offers no advantage over traditional sources.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (large language model chatbots) used for health advice. It documents that these systems provide bad advice and wrong diagnoses, which can directly harm users' health. That harm falls under injury or harm to persons, fulfilling the definition of an AI Incident. The study's findings indicate that the systems' outputs have already led to health harm, a realized risk rather than a merely potential hazard. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Safety, Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury), Physical (death)

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots


Articles about this incident or hazard


Relying on AI chatbots for health 'dangerous' as they give bad advice

2026-02-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for health advice. It documents that these systems provide bad advice and wrong diagnoses, which can directly harm users' health. That harm falls under injury or harm to persons, fulfilling the definition of an AI Incident. The study's findings indicate that the systems' outputs have already led to health harm, a realized risk rather than a merely potential hazard. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

AI chatbots give bad health advice, research finds

2026-02-09
EWN Traffic
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) used in a health context. The research highlights that these AI systems can give incorrect or dangerous health advice, which can directly or indirectly lead to harm to a person's health. Although the article reports on research findings rather than a specific incident, the described harm is realized, or at least strongly implied to occur, when people rely on AI chatbots for medical advice. Therefore, this qualifies as an AI Incident due to harm to health caused by the use of AI systems.

AI chatbots give bad health advice, research finds

2026-02-09
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models used as chatbots) providing health advice. The research highlights that these AI systems give incorrect diagnoses and fail to identify urgent health issues, which can directly harm patients' health. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to people's health, as per the definition of AI Incident (harm to health of persons).

AI chatbots give bad health advice, research finds

2026-02-10
Bangkok Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used by people to seek health advice. The study shows that these AI chatbots often give incorrect or misleading advice, which can harm users' health. The harm is indirect but significant, as wrong diagnoses or failure to recognize urgent conditions can cause injury or health deterioration. The event does not merely flag a potential risk but documents realized harm through poor advice and user misunderstanding, meeting the criteria for an AI Incident under harm to health (a).

AI chatbots no better than traditional sources for health advice: Study

2026-02-09
Daily Sabah
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for health advice. It discusses the use of these AI systems and their limitations, including the risk of wrong diagnoses and failure to recognize when urgent help is needed, which could plausibly lead to harm. However, no direct or indirect harm is reported to have occurred. The study's findings and expert commentary emphasize potential medical risks, making this an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential harm from AI chatbot use in health advice, not on responses or ecosystem updates.

Artificial Intelligence chatbots give bad health advice, reveals new research

2026-02-09
The Gulf Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language model chatbots) used by people to seek health advice. The study shows that the AI's use has directly led to harm in the form of incorrect or misleading health advice, which can cause injury or harm to individuals' health (harm category a). The AI systems' outputs are a contributing factor to this harm, as they sometimes generate misleading or incorrect responses, and users may misunderstand or misuse the advice. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to health.

AI chatbots give bad health advice, research finds

2026-02-09
Digital Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots powered by large language models) used by people to seek health advice. The study demonstrates that these AI systems do not reliably provide correct diagnoses or appropriate recommendations, which can cause harm to users' health. The harm is realized or highly plausible given the wrong or missed diagnoses and failure to recognize urgent conditions. Therefore, this constitutes an AI Incident as the AI system's use has directly or indirectly led to harm to health.

AI chatbots give bad health advice, research finds | Fox 11 Tri Cities Fox 41 Yakima

2026-02-09
FOX 11 41 Tri Cities Yakima
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for health advice. The study shows that these AI chatbots do not improve health outcomes compared to traditional methods and can mislead users, potentially causing harm to health. The harm is indirect but materialized, as users may receive wrong diagnoses or fail to seek urgent care based on AI advice. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to health. The article does not merely warn of potential harm but reports on realized risks and poor outcomes from AI chatbot use in health contexts.

Not a real doctor: AI struggles to treat human patients

2026-02-09
Narooma News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used in medical diagnosis and treatment advice. The study demonstrates that these AI systems have provided incorrect or inconsistent medical guidance, which can lead to injury or harm to health (harm category a). The article explicitly warns that relying on AI chatbots for medical advice can be dangerous and result in wrong diagnoses and failure to recognize urgent medical needs. Therefore, the AI system's use has directly led to harm or risk of harm to people, qualifying this as an AI Incident.

Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows

2026-02-09
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing medical advice that is frequently wrong or inconsistent, which can directly lead to harm to users' health. The study documents that users often make incorrect decisions based on the AI advice, and the AI's failure to reliably guide users constitutes a malfunction or misuse in the use phase. The harm is to the health of individuals relying on these chatbots, fulfilling the criteria for an AI Incident. Although the study is experimental, the harm is realized or highly plausible given the widespread use of these chatbots for health information. Hence, the classification as AI Incident is appropriate.

Using AI for medical advice 'dangerous', Oxford study finds

2026-02-10
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for medical advice, which is a high-stakes application. The study found that the AI's outputs were often inaccurate and inconsistent, posing risks to patients' health. This aligns with harm category (a) injury or harm to health of persons. The AI's role is pivotal as it provided medical information that could mislead users, potentially causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Using AI Chatbots To Google Your Symptoms? New Research Says It Can Be Very Dangerous

2026-02-10
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for medical advice. The research shows that their use has directly led to harm or risk of harm to people's health by providing unreliable and sometimes dangerous guidance. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to the health of persons. The harm is realized or ongoing as users are relying on these chatbots and receiving misleading information, which can cause injury or harm to health. Therefore, this event is classified as an AI Incident.

Misleading AI Chatbots Putting Lives At Risk

2026-02-10
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) used for medical advice, which is a high-stakes domain affecting health. The AI's inconsistent and inaccurate outputs have directly led to risks of harm to patients, fulfilling the criteria for an AI Incident under harm to health. The study's findings demonstrate that reliance on these AI systems can cause injury or harm to people, even if not all users are harmed yet, as the risk is realized in the examples given. The event is not merely a potential hazard or complementary information but documents actual harm caused by AI use in practice.

Urgent warning over using AI for medical decisions as chatbots give inaccurate advice - Manchester Evening News

2026-02-10
Manchester Evening News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) used for medical advice. The research shows these AI systems can give wrong or inconsistent medical information, which could plausibly lead to harm to users' health if they rely on such advice. Although no specific cases of injury or harm are reported, the warnings and study findings indicate a credible risk of harm from AI use in this sensitive domain. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to health. There is no indication of actual harm having occurred yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the risk and dangers of AI use in medical advice rather than updates or responses to past incidents.

Could daily AI chatbot use be linked to higher depression symptoms?

2026-02-11
Free Malaysia Today
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (AI chatbots) and their use, but it does not report any direct or indirect harm caused by these systems. The research shows a correlation between AI chatbot use and depressive symptoms but does not establish causation or actual harm caused by the AI. The article also discusses the potential benefits and limitations of AI chatbots for mental health support and urges caution. This fits the definition of Complementary Information, as it provides context and understanding about AI's impact on mental health without describing a specific incident or hazard.

AI chatbots give bad health advice, research finds

2026-02-10
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (chatbots powered by large language models) in providing health advice. The study shows that these AI systems' outputs can lead to incorrect identification of health problems and inappropriate actions, constituting indirect harm to individuals' health. Although the harm stems not from a single incident but from widespread use and potential misuse, the realized harm (wrong diagnoses, failure to recognize the need for urgent help) is clearly articulated. Therefore, this qualifies as an AI Incident due to harm to health caused by the AI system's use.

AI chatbots can't replace doctors yet, warns Oxford study

2026-02-10
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLM chatbots) used for medical advice, which is a high-stakes domain. The study identifies potential risks and challenges in relying on AI for health-related decisions, implying plausible future harm if users depend on AI outputs that mix good and bad information. Since no actual harm or incident has been reported, but there is a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Study: Medical Chatbots Not Ready Yet For Patient Care

2026-02-10
Crooks and Liars
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (medical chatbots) whose use in patient care is evaluated. The study finds that these chatbots are not yet reliable and can provide false or inconsistent medical advice, which could plausibly lead to harm to patients' health if used improperly. Since no actual harm is reported but the risk is credible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The article also includes a company response about improvements, but the main focus is on the study's findings about risks and readiness, not on mitigation or governance responses, so it is not Complementary Information.

ChatGPT Is Terrible at Giving Medical Advice, Study Confirms - RELEVANT

2026-02-10
RELEVANT
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI chatbots (AI systems) for medical advice and documents their poor performance in realistic health scenarios, which can lead to wrong diagnoses and potentially dangerous outcomes. This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The harm is realized or ongoing as people are actively using these chatbots for medical advice and being misled. Hence, this is an AI Incident rather than a hazard or complementary information.

Chatbots Make Terrible Doctors, New Study Finds

2026-02-10
404 Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models/chatbots) used for medical advice. The study found that these AI systems, when used by people, often gave wrong or conflicting medical advice, including failure to recommend emergency care in serious cases. This misuse or malfunction of AI systems in a high-stakes health context can directly lead to injury or harm to health, fulfilling the criteria for an AI Incident. The article describes realized harm risks and examples of incorrect advice, not just potential future harm, so it is an AI Incident rather than a hazard or complementary information.

Dr. AI not the most reliable source of medical advice, study finds

2026-02-10
ConsumerAffairs
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (large language models/chatbots) used for medical advice that have produced outputs with errors and misleading information, which can directly harm users' health. The AI's involvement is in its use, and the harm is realized or ongoing as users rely on these systems for health guidance. Therefore, this qualifies as an AI Incident due to direct harm to health caused by AI system outputs.

Trusting AI for health issues is risky

2026-02-11
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The article centers on the potential dangers and ethical considerations of using AI chatbots for health advice, emphasizing that AI can produce misleading or false information and is not a substitute for professional medical judgment. While it clearly identifies plausible risks of harm, it does not describe any realized harm or a specific event where AI caused injury, rights violations, or other harms. Therefore, it fits the definition of an AI Hazard, as it outlines credible risks that AI use in health diagnosis could plausibly lead to harm, but no actual incident is reported.

Study warns AI chatbots can pose risks when used for medical advice

2026-02-11
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models used as chatbots) and discusses their use in medical advice, which directly relates to health outcomes. Although no specific harm has been reported as having occurred yet, the study warns that the use of these AI chatbots could plausibly lead to harm to patients' health due to inaccurate or inconsistent medical information. Therefore, this event fits the definition of an AI Hazard, as it highlights credible risks of harm stemming from the use of AI systems in a sensitive domain like healthcare.

Dr. Google Still Beats Dr. Chatbot: Why AI Fails the Medical Advice Test

2026-02-11
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for medical advice. It documents direct harms resulting from their use: inaccurate or misleading medical information causing potential health risks to users. The 'authority effect' increases the likelihood of harm by fostering misplaced trust in AI outputs. These harms fall under injury or harm to health (a) and harm to communities (d). The study's findings and examples confirm that the AI systems' use has directly led to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Is it Dangerous to Use AI for Medical Advice?

2026-02-11
ITNewsAfrica.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for medical advice, and the research finds that their use can harm health through inaccurate or inconsistent information. This constitutes an AI Incident because the AI system's use has directly led to realized risks of harm to patients' health, even though the harm is described in terms of risk rather than specific cases. The study's findings demonstrate that the AI's outputs can cause injury or harm to health, fulfilling the criteria for an AI Incident rather than a mere hazard or complementary information.

Oxford Study Warns Against Using AI Chatbots for Medical Advice

2026-02-11
News Ghana
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used by patients to obtain medical advice. The study shows that these AI systems provide unreliable diagnoses and recommendations, which users struggle to interpret correctly, leading to a higher likelihood of incorrect self-diagnosis and potentially harmful health decisions. This constitutes indirect harm to health caused by the AI systems' outputs and their use, fulfilling the criteria for an AI Incident. The article does not merely warn of potential future harm but documents realized risks and harms from the use of AI chatbots in medical advice.

Oxford Study Warns AI Chatbots Can Give Dangerous Medical Advice - iAfrica.com

2026-02-11
iAfrica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language model chatbots) used for medical advice. The study shows these AI systems can provide inaccurate or inconsistent information that users find hard to distinguish, which could directly lead to injury or harm to health (harm category a). While no actual harm event is described, the warning about dangers and risks to patients indicates a credible potential for harm. This fits the definition of an AI Hazard because the AI system's use could plausibly lead to harm, but no realized harm incident is reported. Therefore, the event is best classified as an AI Hazard.

Health advice from AI chatbots is frequently wrong, study shows

2026-02-12
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI chatbots are explicitly involved as AI systems providing health advice. The study shows that their use has led to incorrect diagnoses and inappropriate recommendations, which can cause injury or harm to individuals relying on this advice. The harm is realized, not just potential, as users have been misled by the AI outputs. The article details direct consequences of AI system use in health contexts, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of false or inconsistent advice and the risk of serious health consequences confirm the classification as an AI Incident.

AI Chatbots Are Even Worse at Giving Medical Advice Than We Thought

2026-02-12
Lifehacker
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM chatbots) used for medical advice, which is a clear AI system involvement. The harms described include incorrect diagnoses, unsafe treatment recommendations, and risks to patient care, which constitute injury or harm to health (harm category a). The article provides evidence of realized harm or significant risk of harm due to reliance on these AI outputs by both patients and clinicians. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm. The article is not merely about potential future harm (hazard) or a governance response (complementary information), but about actual harms and risks already observed and documented.

AI Chatbots Giving 'Dangerous' Medical Advice, Oxford Study Warns - Decrypt

2026-02-12
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) used in medical decision-making. The study demonstrates that these AI systems have directly led to harm by providing poor or dangerous medical advice, which can cause injury or harm to individuals' health. The article explicitly warns that relying on these AI chatbots for medical advice is dangerous, indicating realized harm. Therefore, this qualifies as an AI Incident due to harm to health caused by the use of AI systems.

Landmark study reveals AI's dangerous shortcomings in medical advice - NaturalNews.com

2026-02-12
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article clearly describes AI systems (large language models) used for medical advice that have caused harm by providing incorrect diagnoses and dangerous recommendations, leading to patients requiring emergency care. This constitutes injury or harm to health (harm category a). The AI's malfunction or inadequacy in this context is central to the harm, fulfilling the criteria for an AI Incident. The study's findings and real-world cases confirm that harm has materialized, not just potential risk, so this is not merely a hazard or complementary information. Hence, the classification as AI Incident is appropriate.

AI Chatbot Health Advice: Essential Tips

2026-02-12
Mirage News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots) used for health advice and discusses potential harms such as misinformation and patient confusion. However, it does not report any actual harm or incident resulting from the AI's use, nor does it describe a specific event where harm was narrowly avoided. The focus is on general observations, benefits, risks, and expert commentary rather than a concrete AI Incident or AI Hazard. Therefore, it fits best as Complementary Information, providing context and understanding about AI's role in healthcare without reporting a new incident or hazard.

Using an AI chatbot for health advice? Keep these tips in mind

2026-02-12
YaleNews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots using large language models) used for health advice, which can plausibly lead to harm such as misinformation causing patient confusion or misdiagnosis. However, the article primarily discusses general risks and benefits without reporting a specific event of realized harm or malfunction. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it serves as complementary information by providing expert analysis and guidance on the implications of AI chatbot use in healthcare, enhancing understanding of potential risks and benefits.

Study warns AI chatbots can pose risks when used for medical advice

2026-02-12
azertag.az
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models used as chatbots for medical advice) and discusses their use leading to potential harm to patients' health due to inaccurate or inconsistent medical information. The study's findings indicate that reliance on these AI systems for medical decisions poses real risks, which fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm or risk of harm to health. Although the article focuses on the study's warning rather than reporting specific cases of harm, the described risks and evidence from the study imply realized or imminent harm to users relying on AI for medical advice, qualifying it as an AI Incident rather than a mere hazard or complementary information.

Why AI Is a Terrible Doctor (and Could Cost You a Fortune)

2026-02-13
Money Talks News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (medical chatbots) used by people to diagnose health conditions. The AI's malfunction (hallucinations and inaccurate advice) has directly led to harm by misleading users about serious medical conditions, which can cause injury or worsen health outcomes (harm to health) and financial harm due to unnecessary or delayed medical treatment. Therefore, this event fits the definition of an AI Incident as the AI system's use has directly led to harm.

Study Shows AI Chatbots Still Fall Short on Reliable Health Advice for Humans - TV360 Nigeria

2026-02-13
TV360 Nigeria
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI chatbots like GPT-4o, Llama 3, Command R+) used for health advice. The study warns that reliance on these AI systems could be risky and lead to incorrect diagnoses or failure to identify urgent medical situations, which is a plausible future harm. However, the article does not report any actual injuries, health harms, or rights violations resulting from the AI chatbots' use. The harm is potential rather than realized. Thus, the event fits the definition of an AI Hazard, as it describes circumstances where AI use could plausibly lead to harm but does not document an AI Incident.

A study warns of the risks of using AI chatbots to seek medical advice

2026-02-09
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) used for medical advice. The study shows that these AI systems produce inaccurate and misleading information, which can harm individuals' health decisions. Since the AI's use has directly or indirectly led to harm to people's health, this qualifies as an AI Incident under the definition of harm to health caused by AI system use.

Study warns of risks of using AI chatbots to seek medical advice

2026-02-09
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models/chatbots) used for medical advice, which is a domain where harm to health is possible. However, the study reports on the gap between promise and actual utility and warns about risks, without describing any realized harm or incident. Therefore, this qualifies as an AI Hazard, as the use of these AI systems could plausibly lead to harm in health decisions, but no direct or indirect harm has been reported yet.

A study warns of the risks of using artificial intelligence chatbots to seek medical advice

2026-02-09
Levante
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) for medical advice, which is explicitly stated. The study demonstrates that the AI's outputs are inaccurate and inconsistent, leading users to make poor health decisions, which constitutes indirect harm to health. The AI system's use is the direct cause of this risk, fulfilling the criteria for an AI Incident. The article does not describe a future risk but an existing problem evidenced by the study's findings. Hence, it is not merely a hazard or complementary information but an incident involving AI-related harm.

A study warns of the risks of using AI chatbots to seek medical advice

2026-02-09
La Capital MdP
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) used for medical symptom evaluation, which is a high-risk application area. The study documents that these AI systems generate misleading or incorrect advice, which could directly or indirectly lead to injury or harm to health (harm category a). While the article does not describe a specific incident of harm occurring, it warns about the credible risk of harm from using these AI chatbots for medical advice. Therefore, this qualifies as an AI Hazard because the development and use of these AI systems could plausibly lead to an AI Incident involving health harm in the future.

Warning over the risks of seeking medical advice from AI chatbots

2026-02-10
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) used for medical advice, which is a high-stakes domain where inaccurate information can plausibly lead to harm to health (a). The study demonstrates that current LLMs are not reliable for direct patient use, indicating a credible risk of harm if deployed prematurely. However, the article does not report any actual injuries or health damages resulting from AI use, only the potential for such harm. Thus, this qualifies as an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the risk and evaluation of harm potential, not on responses or updates to past incidents.

A study warns of the risks of using AI chatbots to seek medical advice

2026-02-09
Periódico El Día
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLMs) used for medical advice, which is a high-risk domain. The study reveals that these AI systems can provide inaccurate or misleading information, which could plausibly lead to harm to health if patients rely on them for diagnosis or treatment decisions. Since the harm is not reported as having occurred but is a credible risk identified by the study, this qualifies as an AI Hazard. The article does not describe an actual incident of harm but warns about plausible future harm from the AI system's use in healthcare.

Take note: this study reveals the risks of using artificial intelligence for medical advice

2026-02-10
FayerWayer
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLMs) used in medical advice, which is a high-stakes domain where incorrect information can lead to harm. The study identifies that current AI use in this context can plausibly lead to harm due to misleading or incorrect advice, but no actual harm or incident is reported. Therefore, this qualifies as an AI Hazard because it describes credible risks of harm from AI use in healthcare, but no realized harm or incident is documented. The article is not merely general AI news nor a response or update to a past incident, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems and their impact on health decision-making.

University of Oxford: study reveals risks of AI in health

2026-02-09
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) for health diagnosis and advice, which is explicitly stated. The study demonstrates that these AI systems provide inconsistent and sometimes misleading information, which can lead to harm to individuals' health if they rely on such advice. This constitutes an AI Incident because the AI system's use has directly led to harm (or at least significant risk of harm) to people seeking medical advice. The article describes realized harm in terms of poor decision-making and potential danger to patients, not just a hypothetical risk, thus qualifying as an AI Incident rather than a hazard or complementary information.

Study warns of the risks of using AI chatbots to seek medical advice

2026-02-09
Noticias Venevisión
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (LLMs used as chatbots for medical advice). The study shows that these AI systems provide inaccurate or misleading information, which could plausibly lead to harm to users' health if they rely on such advice. Although no specific injury or harm is reported as having occurred, the potential for harm is credible and significant given the context of health decisions. Thus, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the risks and limitations of AI in this context rather than updates or responses to a prior incident. It is not Unrelated because AI involvement and plausible harm are central to the report.

An Oxford study warns of the risks of using ChatGPT to seek medical advice

2026-02-10
La Nacion
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (large language models like ChatGPT) for medical advice, which directly relates to health outcomes. The study demonstrates that AI use in this context can lead to incorrect diagnoses and potentially dangerous decisions, thus posing a risk of harm to users' health. Although the article does not report a specific incident of harm occurring, it clearly identifies the plausible risk of harm from AI use in medical advice. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no actual harm incident is described.

Is it a good idea to use AI chatbots to seek medical advice? A study warns of the risks

2026-02-10
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) for medical advice, which is explicitly stated. The study shows that these AI systems provide inconsistent and sometimes incorrect medical information, which can mislead users and potentially cause harm to their health. This constitutes indirect harm to health (a), fulfilling the criteria for an AI Incident. The article does not merely warn about potential risks but documents actual risks and shortcomings observed in real user interactions, confirming realized harm rather than just plausible future harm. Hence, the classification as AI Incident is appropriate.

Study warns of risks of using AI chatbots for medical advice

2026-02-10
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (LLMs) for medical advice, which is explicitly stated. The study shows that the AI-generated advice was inaccurate and inconsistent, leading to poor decision-making by users. This constitutes indirect harm to health, as users relying on the AI could be misled about their medical conditions and necessary actions. The article warns that current LLMs are not ready for direct patient care due to these risks. Hence, this qualifies as an AI Incident because the AI system's use has directly led to realized or very likely harm to individuals' health through misinformation and poor decision support.

AI no better than other methods for patients seeking medical advice, study shows

2026-02-09
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for medical advice, fulfilling the AI System criterion. However, the study reports no direct or indirect harm to patients; rather, it shows AI's performance is no better than other methods and sometimes misleading. There is no indication of injury, rights violations, or other harms occurring. The event does not describe a plausible future harm scenario beyond general concerns about AI's limitations. Instead, it provides research findings and expert commentary on AI's current capabilities and pitfalls, which fits the definition of Complementary Information. It enhances understanding of AI's impact in healthcare without reporting a specific AI Incident or AI Hazard.

AI no better than other methods for patients seeking medical advice, study shows

2026-02-10
Rappler
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for medical advice, fulfilling the AI System criterion. However, the study reports on performance and potential pitfalls without evidence of actual harm or injury to patients. The AI's incorrect or misleading advice is noted, but no direct or indirect harm has occurred or is reported. The event is primarily a research study providing insights into AI's effectiveness and limitations, which fits the definition of Complementary Information as it enhances understanding of AI impacts without describing a new incident or hazard.

AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

2026-02-09
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) used for medical advice, which fits the definition of an AI system. However, the study reports on the AI's performance and its comparison to other methods without any reported injury, harm, or violation of rights. There is no indication that the AI caused direct or indirect harm to patients, nor that there was a plausible risk of harm beyond the study's findings. The main focus is on assessing AI's effectiveness and identifying gaps, which informs understanding of AI's role in healthcare. This aligns with Complementary Information, as it provides supporting data and context about AI systems' impacts without describing a specific incident or hazard causing harm.

AI no better than other methods for patients seeking medical advice, study shows

2026-02-09
1470 & 100.3 WMBD
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models) used for medical advice, which fits the definition of an AI system. However, the study reports on the AI's performance and limitations without evidence of actual harm or injury resulting from AI use. The article does not describe any incident where AI caused or contributed to harm, nor does it suggest a plausible future harm scenario beyond general performance concerns. Therefore, this is not an AI Incident or AI Hazard. Instead, it provides complementary information about AI capabilities and limitations in healthcare, contributing to understanding and future assessment.

Oxford study shows AI no better than other methods for patients seeking medical advice

2026-02-10
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models) used for medical advice, but the study reports no actual harm or injury caused by these AI systems. The findings show AI is no better than other methods and sometimes provides misleading advice, indicating potential risks but not confirmed incidents of harm. The article focuses on research results and the gap between AI potential and real-world use, which fits the definition of Complementary Information as it enhances understanding of AI impacts without reporting a specific incident or hazard.

AI Health Chatbots Are Not Helping Patients Make Better Decisions

2026-02-10
ProPakistani
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models used as health chatbots) and their use by people for medical advice. However, the article does not describe any realized harm or injury caused by these AI systems, only that their advice is not better than traditional methods and can sometimes be misleading. There is no direct or indirect harm reported, nor a plausible imminent risk of harm described. The main focus is on research findings assessing AI performance and highlighting gaps, which fits the definition of Complementary Information rather than an Incident or Hazard.

Chatbots fall short in guiding patients on medical decisions

2026-02-11
Nigeria Sun
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models used as medical chatbots) and their use in medical decision-making. Although the AI systems sometimes provide misleading or inaccurate advice, the article does not report any actual injury, harm, or violation of rights resulting from their use. The study highlights a performance gap and potential risks but does not document realized harm or incidents. Therefore, this event does not meet the criteria for an AI Incident. It also does not describe a specific plausible future harm event or credible warning of imminent harm, so it is not an AI Hazard. Instead, it provides research findings that inform understanding of AI's current limitations and risks in healthcare, which fits the definition of Complementary Information.

Artificial intelligence, the new assistant for neurologists

2026-02-11
Asr Iran, analytical news site for Iranians worldwide (www.asriran.com)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Prima) used for interpreting MRI scans, which fits the definition of an AI system. However, the article does not report any harm, malfunction, or misuse resulting from the AI system's use. Instead, it highlights the system's development, evaluation, and potential benefits. Since no harm has occurred or is reported as plausible in the near term, and the article mainly provides information about the AI system's capabilities and research progress, it fits the category of Complementary Information rather than an Incident or Hazard.

Medical consultation with artificial intelligence can be dangerous

2026-02-10
Khabar Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (medical chatbots) explicitly used for medical consultation. The study shows that these AI systems provide unreliable advice, which can lead to harmful health decisions by users. This is a direct link between AI use and potential harm to health, fulfilling the criteria for an AI Incident. The article describes realized risks of harm from AI use, not just potential future harm, and discusses the AI's malfunction or limitations in real-world use. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.

Do you talk to AI every day? Watch out for this risk!

2026-02-11
Khabar Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots) and discusses a health-related risk (depression symptoms) associated with their frequent use. Although the study finds a significant correlation, it does not confirm that AI chatbot use directly or indirectly caused harm, only that there is a plausible risk. No actual incident of harm is described, so it is not an AI Incident. The article does not focus on responses, governance, or updates, so it is not Complementary Information. Hence, the event is best classified as an AI Hazard due to the plausible future harm from AI chatbot use.

Artificial intelligence, the new assistant for neurologists

2026-02-10
ISNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing MRI brain scans and providing diagnostic outputs. However, there is no indication that the AI system has caused any injury, violation of rights, disruption, or other harms. The system is still under evaluation and research, with no reported incidents of harm or malfunction. Therefore, this is not an AI Incident or AI Hazard. The article provides complementary information about the development and potential impact of the AI system in medical imaging, fitting the definition of Complementary Information.

From nerdy Gemini to rebellious Grok: a look at how AI chatbot personalities take shape

2026-02-09
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (chatbots) and their personalities, which are AI system characteristics. It discusses some harms that have occurred or could occur (e.g., a chatbot encouraging suicidal thoughts, generating inappropriate content, or censoring information), but these are presented as examples or general concerns rather than a report of a specific incident or hazard event. The main focus is on the conceptual and developmental aspects of AI personalities and their implications, making this a piece of complementary information that enhances understanding of AI system behavior and risks rather than reporting a new AI Incident or AI Hazard.

Oxford scientists warn: medical consultation with artificial intelligence can be dangerous

2026-02-10
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI chatbots (AI systems) used for medical advice and documents a study showing that their outputs can be unreliable and potentially dangerous. The AI's involvement is in its use for medical decision-making, which has directly led to risks of harm to users' health. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (or risk of harm) to people's health. The article does not merely warn about potential future harm but reports on realized risks and observed failures in AI medical advice, thus qualifying as an AI Incident rather than a hazard or complementary information.

OpenAI's entry into the Pentagon's AI infrastructure; experts warn of potential risks

2026-02-11
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (customized ChatGPT) in a critical infrastructure context (military defense). Although no direct harm or incident has occurred yet, experts explicitly warn about plausible risks such as overreliance on AI outputs leading to degraded security measures and potential vulnerabilities from handling sensitive data. This fits the definition of an AI Hazard, as the development and deployment of AI in this sensitive environment could plausibly lead to harms like disruption of critical infrastructure management or security breaches. The article does not describe any realized harm or incident, nor is it primarily about responses or governance measures, so it is not an AI Incident or Complementary Information.

Cisco chief: background checks are needed for AI agents too

2026-02-11
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI agents) being developed and used by Cisco, including their autonomous capabilities and the company's emphasis on safety and security to prevent harm. No actual harm or incident is reported; instead, the discussion centers on the plausible future risks and the need for safeguards. This fits the definition of an AI Hazard, as the development and deployment of these AI agents could plausibly lead to incidents if safety is not ensured. It is not Complementary Information because the focus is not on updates or responses to past incidents, nor is it unrelated since AI systems and their risks are central to the article.

University of Oxford: AI chatbots often deliver wrong diagnoses in emergencies

2026-02-11
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (medical AI chatbots based on large language models) whose use in providing medical advice has been experimentally shown to produce incorrect diagnoses and recommendations in real interactions with users. This can directly lead to harm to health (harm category a) if users follow the faulty advice, especially in emergencies. The study highlights the AI systems' poor performance and the risk of misdiagnosis and delayed or inappropriate care, which is a direct harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized or highly likely in practice.

Bad advice from Doctor Chatbot: language models fail at medical diagnoses

2026-02-11
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) used for medical diagnosis and advice. The study demonstrates that these AI systems often fail to provide correct diagnoses or appropriate next steps, which could plausibly lead to harm if users rely on them. No actual harm is reported as having occurred, but the documented inaccuracies and risks in real-world use constitute a credible potential for harm. Hence, this is an AI Hazard rather than an AI Incident. The article is not merely general AI news or a complementary update but focuses on the risk posed by AI chatbots in healthcare advice.

Chatbots in medicine: high error rate in diagnoses

2026-02-10
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models/chatbots) used for medical diagnosis and advice, which is explicitly stated. The study shows that these AI systems, when used by laypersons, lead to incorrect diagnoses and advice, which can cause harm to patients' health (harm category a). The harm is indirect, stemming from the AI system's use and the communication failure between humans and AI, leading to misinterpretation and wrong medical decisions. The article highlights the danger of relying on these AI chatbots for health issues, confirming that this harm potential is being realized. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is demonstrated and ongoing in the context of user interactions.

AI gets more than 65 percent of diagnoses wrong

2026-02-09
Berner Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language model chatbots) used for medical diagnosis and advice. The study demonstrates that these AI systems, when used by laypersons, frequently produce incorrect diagnoses and wrong action recommendations, which can directly harm users' health by misleading them about their medical conditions and emergency needs. The harm is realized, not just potential, as users are misled by the AI outputs. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health (a). The article does not merely warn about potential harm or discuss responses but reports on actual diagnostic failures and their consequences, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the event and harm.

Study shows weaknesses of AI in medical diagnoses

2026-02-11
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (medical diagnostic chatbots) and their use, but no realized harm or incident is described. The study's findings indicate potential risks or limitations but do not report any actual injury, rights violation, or other harm caused by the AI systems. Therefore, this is not an AI Incident or AI Hazard. The article provides contextual information about AI capabilities and limitations, which fits the definition of Complementary Information, as it enhances understanding of AI systems' performance and informs future risk assessment and development.

AI gives health advice, but a study reveals: much of it is wrong

2026-02-19
Estadão
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT and Llama) used for health advice, which is explicitly stated. The study shows that these AI systems often provide incorrect or misleading medical information, leading users to potentially harmful decisions. This constitutes indirect harm to health (a), as users rely on AI advice that is inaccurate or inconsistent. The article describes realized harm in the form of poor medical advice and potential health risks, not just a hypothetical risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The AI system's use is the cause of the harm, fulfilling the definition of an AI Incident.

AI gives health advice, but a study reveals: much of it is wrong - Folha Vitória

2026-02-19
Folha Vitória
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots like ChatGPT and Llama) used for health advice, which is explicitly stated. The study shows that these AI systems have provided incorrect or misleading medical information, which can cause injury or harm to users' health. The harm is realized or at least strongly implied, as users may follow wrong advice leading to health risks. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons. The article does not merely discuss potential risks or improvements but documents actual performance failures with health implications.

Medical advice given by AIs like ChatGPT is imprecise and inconsistent, Oxford study reveals

2026-02-22
MediaTalks em UOL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLM-based chatbots) used for medical advice. The study shows that the AI's use leads to imprecise and inconsistent advice, which can harm users' health by misleading them about medical conditions and appropriate actions. This constitutes indirect harm to health caused by the AI system's use. Therefore, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by AI system use.