AI-Generated Avatars Spread Pro-Trump Disinformation Ahead of US Midterms

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hyper-realistic AI-generated avatars, posing as fervent Trump supporters, have flooded social media platforms with partisan political messaging and disinformation ahead of the US midterm elections. This use of AI manipulates public opinion and threatens the integrity of democratic processes by spreading deceptive content to influence voters.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly generating hyper-realistic avatars that spread political messaging, which is a direct use of AI. The harm is realized as these AI-generated influencers are actively shaping public opinion and potentially distorting electoral outcomes, which qualifies as harm to communities and a violation of democratic rights. The article provides evidence of ongoing dissemination and influence, not just potential risk, thus meeting the criteria for an AI Incident rather than a hazard or complementary information. The involvement of AI in creating deceptive political influencers that manipulate public discourse is central to the harm described.[AI generated]
AI principles
Democracy & human autonomy; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public; Government

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fervent and fake: High-glam AI avatars boost Trump ahead of midterms

2026-05-10
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating hyper-realistic avatars that spread political messaging, which is a direct use of AI. The harm is realized as these AI-generated influencers are actively shaping public opinion and potentially distorting electoral outcomes, which qualifies as harm to communities and a violation of democratic rights. The article provides evidence of ongoing dissemination and influence, not just potential risk, thus meeting the criteria for an AI Incident rather than a hazard or complementary information. The involvement of AI in creating deceptive political influencers that manipulate public discourse is central to the harm described.
Fervent and fake: High-glam AI avatars boost Trump ahead of midterms

2026-05-10
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated avatars used to flood social media with political messaging, including false claims and partisan propaganda. The AI systems' outputs are directly linked to the dissemination of disinformation that can harm communities by distorting political discourse and influencing elections. The involvement of AI in creating these synthetic influencers and their active role in spreading harmful content meets the definition of an AI Incident, as the harm to communities is occurring and the AI system's use is pivotal in causing this harm.
Blonde, fervent and fake: Can AI-generated Trump fans boost him in the midterms?

2026-05-10
France 24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating fake political influencers and content that manipulate public opinion and spread disinformation. This use of AI directly leads to harm to communities by undermining democratic processes and the integrity of elections, which fits the definition of an AI Incident. The article reports that these AI-generated influencers are actively present and influencing political discourse, not merely posing a potential future risk. Hence, it is not an AI Hazard or Complementary Information. The harm is clearly articulated and linked to the AI system's use, fulfilling the criteria for an AI Incident.
Fervent and fake: High-glam AI avatars boost Trump ahead of midterms

2026-05-10
RTL Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating hyper-realistic avatars and political content used to influence voters, which is a direct use of AI. The harm is realized as these AI influencers are actively spreading political messaging that can distort democratic processes and manipulate public opinion, constituting harm to communities. The article documents ongoing activity and impact, not just potential risk, fulfilling the criteria for an AI Incident. The involvement of AI in creating deceptive, influential content that affects political outcomes is a clear example of AI-driven harm.
High-glam AI avatars boost Trump ahead of midterms

2026-05-11
New Straits Times Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated avatars) to spread political messaging and disinformation, which directly harms communities by influencing elections and public opinion through deceptive means. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d). The article describes realized harm through the active dissemination of misleading political content, not just potential harm, so it is not merely an AI Hazard. It is not Complementary Information because the main focus is on the ongoing use and impact of these AI avatars, not on responses or updates. Therefore, the classification is AI Incident.
Fervor and falsehood: ultra-glamorous AI avatars back Trump ahead of the legislative elections

2026-05-10
Istoe dinheiro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating political influencer avatars that actively spread fervent political messages and potentially misinformation. This use of AI directly leads to harm to communities by manipulating public opinion and possibly affecting electoral outcomes, which fits the definition of an AI Incident. The harm is realized, not just potential, as these AI-generated influencers are already active on social media platforms. The article also references prior instances and research tracking such AI-generated political content, reinforcing the ongoing nature of the harm.
Fervor and falsehood: ultra-glamorous AI avatars back Trump ahead of the legislative elections

2026-05-10
O Povo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic avatars and content used for political messaging. While the article does not explicitly state that harm has occurred, the use of AI-generated avatars to spread political propaganda could plausibly harm communities by influencing elections and spreading misinformation. This situation therefore represents a plausible risk of harm (an AI Hazard) rather than a realized harm incident.
AI-generated influencers flood US social networks with fervent political messages in support of Trump

2026-05-10
Correio do povo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating political influencer avatars and content that actively disseminate misleading and manipulative political messages. The harm is realized as these AI-generated influencers are already flooding social media with political propaganda, which can distort public opinion and election outcomes, thus harming communities and potentially violating democratic rights. The article provides evidence of ongoing use and impact, not just potential future harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Ultra-glamorous AI avatars back Trump ahead of the legislative elections

2026-05-10
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it discusses AI-generated avatars used to produce and spread political content. The use of these AI influencers has directly led to the dissemination of misleading political messages and disinformation, which harms communities by manipulating public opinion and potentially affecting election outcomes. The article also references prior instances where AI avatars spread unfounded accusations, reinforcing the presence of realized harm. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly caused harm to communities through political misinformation.
Fervor and falsehood: ultra-glamorous AI avatars back Trump ahead of the legislative elections

2026-05-10
UOL notícias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic avatars that spread political messages and unverified claims, which can influence public opinion and election integrity. This use of AI directly relates to the dissemination of misinformation and manipulation of political discourse, which can harm communities and democratic rights. While the article does not confirm actual harm has occurred, the plausible risk of significant societal harm through election interference and misinformation is evident. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no confirmed incident of harm is reported yet.