AI-Generated Persona 'Dona Maria' Fuels Political Polarization in Brazil


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated digital influencer, 'Dona Maria,' created with Google's Gemini, went viral in Brazil by posting aggressive, politically charged content attacking President Lula and the Supreme Court (STF). The avatar's widespread reach and influence raised concerns about manipulation of public opinion, electoral integrity, and potential violations of election law.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an explicitly mentioned AI system (Google's Gemini, among other AI tools) used to generate political content with large-scale social impact. The AI-generated avatar shapes public opinion and election-related discourse, creating risks of misinformation and confusion that amount to harms to communities and potential violations of electoral law. The article describes realized social and political harms, not merely potential risks, and discusses challenges of accountability and regulation. This fits the definition of an AI Incident: the system's use has indirectly led to significant harm to communities and possible legal violations in the electoral context.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public
Government

Harm types
Public interest
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


The AI character going viral with criticism of the Lula government and the STF

2026-04-14
Terra

The AI character going viral with criticism of the Lula government and the STF

2026-04-14
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate realistic political content that has already influenced public discourse and engagement. Although some misinformation has been disseminated, no direct harm, such as a legal violation or physical injury, is reported to have occurred. The main risk is plausible future harm to electoral integrity, voter confusion, and manipulation of democratic processes. This fits the definition of an AI Hazard: the system's use could plausibly lead to an AI Incident. The article also discusses governance challenges and regulatory responses, but its primary focus is potential risk rather than realized harm, so it is not Complementary Information. Hence, the classification is AI Hazard.

Dona Maria: the "Bolsonarist AI" that went viral with attacks on Lula and the STF

2026-04-14
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to create a digital persona that influences political discourse and social media engagement. However, it reports no direct or indirect harm, such as misinformation causing societal damage, a violation of rights, or a legal breach, as having already occurred. Instead, it highlights potential ethical issues and the need to label AI-generated content to avoid confusion and possible future legal consequences. Because the focus is on describing the phenomenon, its social impact, and the surrounding ethical debate rather than a concrete AI Incident or imminent hazard, the classification as Complementary Information is appropriate.

The AI character going viral with criticism of the Lula government and the STF

2026-04-14
O Povo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (video-generation platforms and language models) to create political content that has already influenced public opinion and electoral dynamics, which constitutes harm to communities and potentially violates electoral law. The content's role in spreading politically charged messages, some of which may be false or misleading, and the difficulty of attributing responsibility pose significant risks to democratic processes and electoral fairness. These factors meet the criteria for an AI Incident: the system's use has directly and indirectly led to harm in the form of misinformation, manipulation, and potential violations of electoral regulations.

The viral AI character that criticizes Lula and the STF

2026-04-14
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a viral political avatar that spreads critical, emotionally charged messages about political figures and institutions. The system's outputs have directly influenced public opinion and political discourse, with potential effects on elections and democratic processes, which constitute harm to communities and possible violations of electoral regulations. The article details the system's use, the content's widespread dissemination, and expert concern about its societal impact, fulfilling the criteria for an AI Incident. Although no physical injury or direct legal enforcement is described, the harm to community trust and political stability and the potential legal breaches related to election propaganda are significant and realized. Hence, the classification as AI Incident is appropriate.

AI-created character goes viral criticizing the Lula government and the STF, reaching more than one million views

2026-04-14
Perfil Brasil
Why's our monitor labelling this an incident or hazard?
The character Dona Maria is created entirely by AI and is used to disseminate politically charged content that has gone viral, influencing public opinion and political debate. The article highlights concerns that AI-generated content is being used to mobilize unofficial campaigns and evade judicial restrictions, indicating a breach of legal frameworks governing elections. The system's use has directly led to significant social and political impact, meeting the criteria for harm to communities and violation of legal rights. Hence, this is an AI Incident rather than a mere hazard or Complementary Information.