ChatGPT and Gemini Spread Misinformation from Fabricated Blog


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A BBC journalist created a fake blog post containing invented facts, which ChatGPT, Google Gemini, and AI Overviews quickly cited as truth. The incident exposed a vulnerability through which these AI tools disseminated false information to users, highlighting the risk of unverified content shaping AI-generated responses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (ChatGPT and Google's Gemini) that generate responses based on large language models and internet data. The journalist's fabricated blog was used by these AI systems as a source, causing them to output false information to users. This is a direct use of AI systems leading to harm by spreading misinformation, which affects communities and potentially other societal domains. The harm is realized, not just potential, as the AI systems have already repeated the false claims. Hence, it meets the criteria for an AI Incident due to the direct link between AI use and harm through misinformation dissemination.[AI generated]
AI principles
Robustness & digital security; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Public interest; Reputational

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


Gemini and ChatGPT tricked into producing false information simply by writing a blog

2026-02-23
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and Google's Gemini) that generate responses based on large language models and internet data. The journalist's fabricated blog was used by these AI systems as a source, causing them to output false information to users. This is a direct use of AI systems leading to harm by spreading misinformation, which affects communities and potentially other societal domains. The harm is realized, not just potential, as the AI systems have already repeated the false claims. Hence, it meets the criteria for an AI Incident due to the direct link between AI use and harm through misinformation dissemination.

Gemini and ChatGPT tricked into giving false information in their...

2026-02-23
europa press
Why's our monitor labelling this an incident or hazard?
The AI systems (ChatGPT and Google's Gemini) are explicitly involved as they generate responses based on their training and internet searches. The journalist's deliberate insertion of false data led these AI systems to propagate misinformation, directly causing harm by misleading users. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities through misinformation. The event is not merely a potential risk but a realized harm, as the false information was actively repeated by the AI systems. Hence, it is not an AI Hazard or Complementary Information but an AI Incident.

Gemini and ChatGPT fooled with a fake blog: journalist exposes vulnerability in Google and OpenAI AI

2026-02-23
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose use led to the dissemination of false information due to their reliance on unverified online content. Although no direct harm has been reported, the demonstrated vulnerability plausibly could lead to significant harms such as misinformation affecting public health or political processes. Therefore, this qualifies as an AI Hazard because it describes a credible risk of harm stemming from AI system use, but no actual harm has yet materialized as per the article. The event is not Complementary Information because it is not primarily about responses or governance actions but about the vulnerability itself. It is not an AI Incident since no realized harm is described.

Gemini and ChatGPT tricked into giving false information in their...

2026-02-23
Notimérica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Gemini) that generate responses based on large language models and internet data. The journalist's fabricated blog content was used by these AI systems as factual input, causing them to provide false information to users. This is a direct use of AI systems leading to harm by spreading misinformation, which can affect communities and public trust. The harm is realized, not just potential, as users receive and may rely on false information. Hence, this qualifies as an AI Incident under the framework, specifically harm to communities through misinformation dissemination.

Gemini and ChatGPT tricked into giving false information in their results, simply by writing a blog containing that data

2026-02-24
La Nacion
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models powering ChatGPT and Gemini) whose use led to the dissemination of false information, a form of harm to communities and public trust. The journalist's fabricated blog was used as a source by the AI, causing the AI to present false claims as facts. This is a direct AI Incident because the AI systems' outputs caused misinformation harm. The article explicitly states that the AI systems repeated the false claims and cited the fabricated blog as a reliable source, confirming realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

How to fool any AI in under 20 minutes

2026-02-24
ADN40
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Google Gemini, AI Overviews) that generated outputs based on false inputs from a fabricated blog. The AI systems' behavior directly led to the spread of false information, which harms communities by misleading users and degrading information quality. This fits the definition of an AI Incident because the AI systems' use and malfunction (lack of verification) directly caused harm. The event is not merely a potential risk but a realized harm, as the false information was actively propagated by the AI systems. Therefore, the classification is AI Incident.

Experiment reveals cracks in the Gemini and ChatGPT chatbots

2026-02-24
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The AI systems (ChatGPT and Gemini) are explicitly involved as they generated and presented false information as true, based on unverified internet content. This use of AI directly led to harm in the form of misinformation dissemination, which is a harm to communities and potentially other societal harms. The event is not merely a potential risk but a realized incident of AI systems propagating falsehoods. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Gemini and ChatGPT fooled with a simple blog: how automated search-engine answers are manipulated

2026-02-26
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini) whose use in generating responses was directly influenced by manipulated online content, leading to the spread of false information. This constitutes harm to communities by disseminating misinformation, fulfilling the criteria for an AI Incident. The AI systems' development and use, specifically their data retrieval and response generation mechanisms, played a pivotal role in causing this harm. Therefore, this event is best classified as an AI Incident rather than a hazard or complementary information.

Journalists and 'hot dogs'

2026-02-27
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini, Google AI Overviews) whose outputs were deliberately manipulated by exploiting data voids and SEO to insert false information. This manipulation caused the AI systems to generate and disseminate misleading content, which can harm users by providing false information, undermining the reliability of digital information and potentially harming communities. Although the experiment was a controlled journalistic test, it reveals a structural vulnerability that can be exploited maliciously, constituting an AI Incident due to the realized harm of misinformation dissemination and the erosion of trust in AI-generated information.