AI-Generated Persona 'Jessica Foster' Deceives MAGA Supporters Online


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated persona named Jessica Foster, portrayed as a patriotic soldier and MAGA supporter, amassed over a million Instagram followers. The account used convincing fake images and videos to mislead audiences, spread political misinformation, and monetize followers, resulting in financial and social harm. The incident highlights AI's role in online deception. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating fake images and a persona that was used to spread political propaganda and misinformation, which misled over a million followers. This directly led to harm to communities by fostering deception and manipulation in the political discourse. The AI system's use in creating and sustaining this false narrative is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation influenced public perception and political messaging. [AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


MAGA has been swooning over an Army soldier and her pro-Trump message. She is AI

2026-03-20
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fake images and a persona that was used to spread political propaganda and misinformation, which misled over a million followers. This directly led to harm to communities by fostering deception and manipulation in the political discourse. The AI system's use in creating and sustaining this false narrative is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation influenced public perception and political messaging.

Meet Jessica Foster: The viral AI fooling millions of MAGA fans

2026-03-17
Euronews English
Why's our monitor labelling this an incident or hazard?
An AI system (the generative AI creating the avatar Jessica Foster) is explicitly involved, producing a fake persona that misleads millions. The AI's use has directly led to harm by spreading deceptive content and potentially manipulating political views, as well as by financial exploitation through adult-content monetization. The violation of platform policies and the potential for propaganda further underline the harm caused. These factors meet the criteria for an AI Incident: the AI system's use has directly led to harm to communities and to violations of rights.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated persona created by an AI image generator, which is used to spread deceptive content and political messaging. The AI system's use has directly led to misinformation and manipulation of public perception, which harms communities by spreading false narratives and potentially influencing political opinions. The harm is realized, not just potential, as the account has gained a large following and influenced many users. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and deception.

MAGA Swoons Over AI Generated Dream Girl

2026-03-20
Taegan Goddard's Political Wire
Why's our monitor labelling this an incident or hazard?
The article highlights the use of AI to create a fictional persona that has attracted a large following, but it does not report any harm or potential harm caused by this AI-generated content. There is no indication of injury, rights violations, disruption, or other significant harms. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context about AI-generated content and its social impact without describing harm.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
Anchorage Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated persona used to deceive and manipulate online audiences, gaining a large following under false pretenses. The AI system's outputs are used to spread political messaging and monetize attention, which has caused harm by misleading people and potentially enabling disinformation campaigns. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and deception. The presence of AI is clear and the harm is realized; the event is not merely a potential risk or complementary information but a concrete case of harm caused by AI-generated content.

Instagrammer Exposes AI-Generated 'Soldier' Account Duping MAGA Supporters Online

2026-03-20
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake social media persona that misled people into giving money, which is a direct harm to individuals (financial harm) and communities (misinformation). The AI-generated content was central to the scam, and the harm has already occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

MAGA Accused Of Swooning Over AI Generated 'Soldier' Jessica Foster

2026-03-20
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic but fake images of a fictional soldier, which were used to create a deceptive social media account with over a million followers. The account influenced political discourse and spread misinformation, which is a clear harm to communities. The AI-generated content directly led to this harm by enabling the creation and spread of false narratives. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in misinformation and political manipulation.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and videos that have directly led to harm by deceiving large online audiences, spreading misinformation, and potentially influencing political discourse. The AI-generated persona is used to manipulate public perception and monetize followers under false pretenses, which is a clear violation of rights and causes harm to communities. The article provides evidence of realized harm, not just potential harm, making this an AI Incident rather than a hazard or complementary information.