AI Chatbots Spread Health Misinformation on Vaccines


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI chatbots like Grok, ChatGPT, and Gemini have been found to provide inaccurate or misleading information on sensitive topics such as COVID-19 vaccines, sometimes repeating debunked claims or fabricating sources. This misinformation poses risks to public health when users rely on these AI-generated responses without verification.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (chatbots) whose use has directly led to harm in the form of misinformation and disinformation, particularly on public health topics. The article documents realized harm (false claims repeated, fabricated links, and incorrect citations) that can negatively impact communities' health and well-being. Therefore, this qualifies as an AI Incident due to harm to communities caused by the AI systems' malfunction and use.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Robustness & digital security; Human wellbeing

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Public interest; Reputational; Psychological

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard


AI tools can fail at fact-checks and searches. Here's why

2025-06-11
UOL notícias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose use has directly led to harm in the form of misinformation and disinformation, particularly on public health topics. The article documents realized harm (false claims repeated, fabricated links, and incorrect citations) that can negatively impact communities' health and well-being. Therefore, this qualifies as an AI Incident due to harm to communities caused by the AI systems' malfunction and use.

AI tools can fail at fact-checks and searches

2025-06-11
Jornal Estado de Minas | Notícias Online
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbots using large language models) and their use in providing information. It documents the AI systems' frequent inaccuracies and misinformation, which could plausibly lead to harm such as public health risks or misinformation-related societal harm. However, it does not describe a concrete event where harm has already materialized directly from the AI outputs. Instead, it presents a credible risk of harm due to the AI's misinformation capabilities and the potential for users to follow incorrect advice. Therefore, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the risk and prevalence of misinformation causing potential harm, not on responses or ecosystem updates. It is not unrelated because AI systems are central to the discussion and the potential for harm is clearly articulated.

AI tools can fail at fact-checks and searches. Here's why

2025-06-11
nsctotal.com.br
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots like Grok, ChatGPT, Gemini) whose use has directly led to the dissemination of false or misleading information about health topics, including vaccines. This misinformation can cause harm to individuals' health and to communities by spreading disinformation. The article provides concrete examples of AI-generated false claims and fabricated links, demonstrating realized harm rather than just potential risk. Hence, it meets the criteria for an AI Incident due to indirect harm to health and communities caused by AI system use.

AI tools can fail at fact-checks and searches. Here's why

2025-06-12
O Povo
Why's our monitor labelling this an incident or hazard?
While the article clearly involves AI systems (chatbots like Grok, ChatGPT, Gemini), it does not describe a specific incident where harm has directly or indirectly occurred due to these AI systems. Instead, it reports on the general performance and reliability issues of AI chatbots, which could lead to misinformation but does not document a concrete harm event. Therefore, this is a discussion of potential risks and limitations rather than a realized harm or a specific hazard event. It fits best as Complementary Information, providing context and analysis about AI system behavior and its implications for misinformation.

AI tools can fail at fact-checks and searches; here's why

2025-06-11
JC
Why's our monitor labelling this an incident or hazard?
The article describes the use and malfunction of AI systems (chatbots like Grok, ChatGPT, etc.) that produce inaccurate or misleading information, which can indirectly lead to harm such as misinformation affecting public health decisions. However, it does not document a specific event where such harm has concretely occurred or been directly linked to an AI system's output causing injury, rights violations, or other harms. Instead, it provides a detailed analysis and warnings about potential risks and observed inaccuracies, supported by studies and expert commentary. Therefore, it fits best as Complementary Information, providing context and understanding about AI system limitations and their societal implications, rather than reporting a discrete AI Incident or AI Hazard.

AI tools can fail at fact-checks and searches. Here's why

2025-06-13
Home
Why's our monitor labelling this an incident or hazard?
The article discusses the use and limitations of AI systems (chatbots) and highlights a case in which an AI system provided incorrect information that could mislead users about vaccine side effects. This constitutes an AI Incident because the AI system's use has indirectly led to harm by spreading misinformation that can distort public health perceptions. The harm is linked to misinformation about vaccines, which can damage communities and public health. Therefore, this is classified as an AI Incident.