Brazilian TV Airs AI-Generated Fake News Image, Spreads Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Brazilian broadcaster SBT's program 'Se Liga Brasil' aired an AI-generated fake image, presenting it as real news about alleged misogyny at a São Paulo gas station. The misinformation sparked public debate and criticism. SBT admitted the error, citing a breach of journalistic standards, and implemented internal corrective measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was involved as the image was generated by AI. The use of this AI-generated image without proper verification led to the spread of false information on a public broadcast, which is a harm to communities by spreading misinformation. The harm is realized, not just potential, as the false image was aired and discussed as if real. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by an AI system's output.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Business; General public

Harm types
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

SBT acknowledges failure after airing report with AI-generated image

2026-04-13
uol.com.br
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate an image that was then used in a news report, which constitutes AI involvement. However, the event describes a failure of editorial judgment and verification rather than harm caused by the AI system itself. There is no evidence of injury, rights violation, or other harms as defined for an AI Incident. The broadcaster's internal response to the mistake fits the definition of Complementary Information, as it provides an update on mitigation following a prior issue involving AI-generated content. Therefore, this event is best classified as Complementary Information rather than an Incident or Hazard.
SBT spreads fake news created by Artificial Intelligence and takes action

2026-04-13
Terra
Why's our monitor labelling this an incident or hazard?
An AI system was involved as the image was generated by AI. The use of this AI-generated image without proper verification led to the spread of false information on a public broadcast, which is a harm to communities by spreading misinformation. The harm is realized, not just potential, as the false image was aired and discussed as if real. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by an AI system's output.
SBT speaks out about AI-generated photo aired on its newscast; understand the case

2026-04-14
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
An AI system generated a fictional image that was mistakenly presented as real news, causing misinformation. The broadcaster admitted the error and took internal corrective measures. There is no evidence of direct or indirect harm such as physical injury, legal rights violations, or significant community harm. The main issue is misinformation dissemination and journalistic standards breach, which the broadcaster is addressing. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard but rather serves as Complementary Information illustrating societal and governance responses to AI-generated misinformation in media.
What SBT has to say about the use of a fake AI-made image on its newscast

2026-04-14
VEJA
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image that was broadcast as real news, leading to misinformation and potential social harm. The broadcaster's failure to verify the AI-generated content before airing it contributed to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities by spreading false information. The broadcaster's response is a complementary action but does not change the classification of the original event as an AI Incident.
SBT airs AI-created fake news live on its newscast

2026-04-14
VEJA
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a false image that was presented as real news, leading to the spread of misinformation. The harm here is the violation of informational integrity and potential harm to community trust and social cohesion, which fits the definition of harm to communities. The AI system's role in creating the false content is pivotal to the incident. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
SBT makes a mistake and broadcasts AI-created fake news live on its news program

2026-04-14
Notícias da TV
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating the false image that was mistakenly presented as real news, which led to the spread of misinformation. This misinformation can be considered harm to communities because it spreads false narratives, although the article does not report any further direct or indirect harm resulting from it (e.g., no reported injury, legal violation, or significant disruption). The broadcaster's admission and corrective actions indicate this is a known incident with mitigation underway. Therefore, this event qualifies as an AI Incident due to the realized misinformation harm caused by AI-generated content being treated as factual news.
SBT presenter broadcasts AI-created fake news and the network responds

2026-04-13
Portal Leo Dias
Why's our monitor labelling this an incident or hazard?
An AI system was used to create false news content that was broadcast, leading to social harm through misinformation and offense. The AI's role in generating the false content is direct and pivotal to the incident. The harm includes a violation of community trust and potential damage to social cohesion due to the spread of misogynistic misinformation. The broadcaster's response confirms its recognition of the harm caused. Therefore, this event meets the criteria for an AI Incident.