AI-Generated Disinformation Targets Misogyny Bill in Brazil


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A coordinated disinformation campaign in Brazil used AI-generated videos and content to spread false narratives about the Misogyny Bill (PL 896/2023) on social media. Influential politicians amplified these AI-created materials, misleading the public and distorting democratic debate, according to a study by Observatório Lupa.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate false videos and content as part of a disinformation campaign targeting a legislative proposal. The AI-generated misinformation has directly contributed to harm by misleading the public and fostering false narratives, which can be considered harm to communities and a violation of rights. The event therefore meets the criteria for an AI Incident due to the realized harm caused by AI-generated disinformation.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Study identifies fake news campaign against proposal to criminalize misogyny

2026-05-11
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content as part of a disinformation campaign, which involves an AI system's use contributing indirectly to harm (misinformation harming communities). However, the article focuses on reporting the findings of a study analyzing this campaign rather than describing a new AI Incident or a new AI Hazard. The harm (misinformation) is ongoing and recognized, but the article's main purpose is to provide supporting data and context about the campaign and AI's involvement. This fits the definition of Complementary Information, as it enhances understanding of AI's impact on societal issues without reporting a new primary harm event or a new plausible hazard.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
Revista Fórum
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate false videos and content that are part of a disinformation campaign targeting a legislative proposal. The AI-generated misinformation has directly contributed to harm by misleading the public and fostering false narratives, which can be considered harm to communities and a violation of rights. Therefore, the event meets the criteria for an AI Incident due to the realized harm caused by AI-generated disinformation.

Study points to mass disinformation about the Misogyny Bill on social media

2026-05-10
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content as part of the disinformation campaign. This content has directly contributed to the spread of false information about the law, which constitutes harm to communities by undermining informed public discourse and potentially violating rights to truthful information. Since the harm is occurring through the use of AI systems to generate misleading content, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities through misinformation.

Disinformation about the Misogyny Bill grows on social media

2026-05-10
Jornal Diário do Grande ABC
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to produce false videos as part of a disinformation campaign targeting a legislative proposal. The disinformation is causing harm by misleading the public, spreading false narratives, and fostering social division, which qualifies as harm to communities. Since the AI-generated content is actively contributing to this harm, the event meets the criteria for an AI Incident due to the harm caused by the AI system's outputs.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
O Cafezinho
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used to produce false and misleading content targeting a legislative proposal. The disinformation campaign has already caused harm by spreading false narratives and conspiracies, which can undermine social trust and democratic processes, thus harming communities. Since the AI-generated content is a direct factor in this harm, the event qualifies as an AI Incident rather than a hazard or complementary information.

Study points to growing disinformation about the Misogyny Bill on social networks

2026-05-10
News Rondonia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create videos spreading false information about the legislative proposal, indicating AI system involvement. The harm is related to misinformation spreading on social media, which can harm communities by distorting public understanding and debate. However, the article focuses on a study analyzing this misinformation rather than reporting a specific AI Incident causing direct or indirect harm or a plausible future harm scenario. The AI-generated content is part of the misinformation ecosystem, but the article's main focus is on the research findings and the broader context of misinformation, making it Complementary Information rather than an AI Incident or AI Hazard.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to create false videos as part of a disinformation campaign. This campaign spreads false narratives and conspiracies that mislead the public about a legislative bill, causing harm to communities by fostering misinformation and social division. The AI-generated content is a direct contributing factor to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the disinformation is actively disseminated and engaged with on social media platforms.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
R7 Notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create false videos and content that spread misinformation about the bill, which has influenced public discourse and political narratives. The disinformation campaign has caused harm to communities by distorting democratic debate and spreading falsehoods, fulfilling the criteria for harm to communities. Since the AI system's use in generating false content is a direct factor in this harm, the event is classified as an AI Incident.

Disinformation about the Misogyny Bill grows on social media

2026-05-11
Blog do Esmael
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to produce false videos as part of a disinformation campaign targeting a legislative bill. The disinformation is actively spreading on social media, causing harm by misleading the public and distorting democratic debate, which qualifies as harm to communities. Since the AI system's use directly contributes to this harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
Opinião e Notícia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate false content that spreads misinformation about a legislative bill, which is causing harm to communities by fostering confusion, fear, and social division. The AI system's use in producing misleading videos is directly linked to the dissemination of harmful disinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and distortion of public discourse.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
JC
Why's our monitor labelling this an incident or hazard?
The presence of AI is reasonably inferred from the mention of AI-generated videos used in the disinformation campaign. The event involves the use of AI in the creation of misleading content, which could plausibly lead to harm such as misinformation spreading and social harm. However, the article does not document a specific AI Incident with realized harm but rather reports on the study's findings about the use of AI in disinformation. This fits the definition of Complementary Information, as it provides supporting data and context about AI's role in societal issues without describing a new AI Incident or Hazard itself.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-10
ContilNet Notícias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate false content as part of a disinformation campaign that is currently active and causing harm by spreading misinformation about a legislative proposal. The AI-generated content is directly contributing to the harm by misleading the public and disrupting informed democratic processes, which fits the definition of an AI Incident due to harm to communities and violation of rights. Therefore, this event is classified as an AI Incident.

Disinformation about the Misogyny Bill grows on social media, study says

2026-05-11
TDTNEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create false videos that misinform the public about the legislative proposal, which is causing social harm by spreading disinformation and fear. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination. Therefore, the event qualifies as an AI Incident.