AI-Generated Persona 'Jessica Foster' Deceives MAGA Supporters Online

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated persona named Jessica Foster, portrayed as a patriotic soldier and MAGA supporter, amassed over a million Instagram followers. The account used convincing fake images and videos to mislead audiences, spread political misinformation, and monetize followers, resulting in financial and social harm. The incident highlights AI's role in online deception. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating fake images and a persona that was used to spread political propaganda and misinformation, which misled over a million followers. This directly led to harm to communities by fostering deception and manipulation in the political discourse. The AI system's use in creating and sustaining this false narrative is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation influenced public perception and political messaging. [AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation

In other databases

Articles about this incident or hazard

MAGA has been swooning over an Army soldier and her pro-Trump message. She is AI

2026-03-20
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fake images and a persona that was used to spread political propaganda and misinformation, which misled over a million followers. This directly led to harm to communities by fostering deception and manipulation in the political discourse. The AI system's use in creating and sustaining this false narrative is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation influenced public perception and political messaging.

Meet Jessica Foster: The viral AI fooling millions of MAGA fans

2026-03-17
Euronews English
Why's our monitor labelling this an incident or hazard?
An AI system (the generative AI creating the avatar Jessica Foster) is explicitly involved, producing a fake persona that misleads millions. The AI's use has directly led to harm by spreading deceptive content and potentially manipulating political views, as well as financial exploitation through adult content monetization. The violation of platform policies and the potential for propaganda further underline the harm caused. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to harm to communities and violations of rights.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated persona created by an AI image generator, which is used to spread deceptive content and political messaging. The AI system's use has directly led to misinformation and manipulation of public perception, which harms communities by spreading false narratives and potentially influencing political opinions. The harm is realized, not just potential, as the account has gained a large following and influenced many users. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and deception.

MAGA Swoons Over AI Generated Dream Girl

2026-03-20
Taegan Goddard's Political Wire
Why's our monitor labelling this an incident or hazard?
The article highlights the use of AI to create a fictional persona that has attracted a large following, but it does not report any harm or potential harm caused by this AI-generated content. There is no indication of injury, rights violations, disruption, or other significant harms. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context about AI-generated content and its social impact without describing harm.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
Anchorage Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated persona used to deceive and manipulate online audiences, gaining a large following under false pretenses. The AI system's outputs are used to spread political messaging and monetize attention, which has caused harm by misleading people and potentially enabling disinformation campaigns. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities through misinformation and deception. The presence of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information but a concrete case of harm caused by AI-generated content.

Instagrammer Exposes AI-Generated 'Soldier' Account Duping MAGA Supporters Online

2026-03-20
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake social media persona that misled people into giving money, which is a direct harm to individuals (financial harm) and communities (misinformation). The AI-generated content was central to the scam, and the harm has already occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

MAGA Accused Of Swooning Over AI Generated 'Soldier' Jessica Foster

2026-03-20
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic but fake images of a fictional soldier, which were used to create a deceptive social media account with over a million followers. The account influenced political discourse and spread misinformation, which is a clear harm to communities. The AI-generated content directly led to this harm by enabling the creation and spread of false narratives. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in misinformation and political manipulation.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-20
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and videos that have directly led to harm by deceiving large online audiences, spreading misinformation, and potentially influencing political discourse. The AI-generated persona is used to manipulate public perception and monetize followers under false pretenses, which is a clear violation of rights and causes harm to communities. The article provides evidence of realized harm, not just potential harm, making this an AI Incident rather than a hazard or complementary information.

A Blonde US Army Soldier Posed With Trump, Putin And Zelensky. She Was AI

2026-03-21
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic fake images and personas that have been used to deceive a large audience, leading to misinformation and manipulation. The harm is realized as followers were misled, which impacts communities and public discourse. The AI system's use in creating and spreading false content directly caused this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Who is Jessica Foster? US soldier who posed next to Donald Trump, Vladimir Putin 'disappears' from social media

2026-03-21
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate realistic images of a fictitious person, which were then used to create a deceptive social media persona. The AI-generated content directly led to harm by misleading over a million followers, spreading false information, and manipulating political sentiments, which harms communities and violates trust. The account's removal and the emergence of successor accounts show ongoing misuse. This fits the definition of an AI Incident because the AI system's use directly caused harm to communities through misinformation and deception.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-21
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the persona is AI-generated and used to deceive and manipulate audiences politically, which is a direct harm to communities by spreading misinformation and potentially influencing political views. The AI system's use in generating and sustaining this fake persona is central to the harm. The harm is realized, not just potential, as thousands of users have been deceived and engaged with the fake account. This fits the definition of an AI Incident due to violation of rights (truthful information) and harm to communities through misinformation and political manipulation.

Viral US Soldier Jessica Foster Seen With Trump Does Not Exist; Account Removed From Instagram

2026-03-21
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create realistic images and videos of a non-existent person, which were disseminated widely on social media. This constitutes the use of AI to generate deceptive content that misled the public. The removal of the account indicates recognition of the issue. The event involves AI-generated misinformation that has already caused social harm by misleading a large audience, thus qualifying as an AI Incident due to harm to communities through misinformation and deception.

The Internet is all about "military beauties" -- but Jessica Foster isn't real

2026-03-21
THE LOCAL REPORT ARTICLES
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake but realistic images and videos of a non-existent military figure, which are then disseminated widely online. This has directly led to harm by misleading large audiences, spreading disinformation, and potentially influencing political opinions and social trust. The AI-generated persona is used to attract followers and monetize through explicit content, indicating intentional misuse of AI for deceptive and manipulative purposes. The harm to communities through misinformation and erosion of trust in authentic information is clear and ongoing, meeting the definition of an AI Incident.

Who is Jessica Foster, the curious case of MAGA dream girl? US Army blonde woman with viral Instagram account identified

2026-03-21
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic but fake social media content that misleads users about the identity and background of 'Jessica Foster.' The AI-generated content includes fabricated military and political imagery, which has caused misinformation and confusion among the public. The account's use to attract followers and direct them to paid platforms further demonstrates misuse of AI-generated personas for financial gain and political messaging. These factors align with the definition of an AI Incident, as the AI system's use has directly led to harm to communities (misinformation) and violations of rights (deceptive practices).

Thousands have swooned over this MAGA dream girl. She's made with AI

2026-03-23
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating fake images and videos of a fictional pro-Trump woman, which are used to deceive audiences and spread political content. This deception has already caused harm by misleading thousands of users, constituting harm to communities and a violation of informational integrity. The AI system's use in this context is central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but an ongoing realized harm through misinformation and manipulation.

Thousands have swooned over this MAGA dream girl. She's made with AI.

2026-03-24
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated fictional persona, which involves an AI system (image generator). However, the event does not describe any direct or indirect harm caused by this AI system, such as misinformation leading to harm or rights violations. The presence of a fake persona on social media is a known phenomenon and can be concerning, but without evidence of harm or plausible imminent harm, it does not meet the threshold for an AI Incident or AI Hazard. Instead, it informs about the use of AI in creating deceptive content, which is valuable complementary information for understanding AI's societal impact.

MAGA has been swooning over a beautiful Army soldier and her pro-Trump message. She is AI

2026-03-22
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI-generated persona 'Jessica Foster' was used to push a political agenda, gaining over a million followers and spreading misinformation. The AI system's outputs (deepfake images and fabricated identity) directly caused harm by deceiving users and enabling propaganda dissemination. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation.

MAGA Men Are Being Fooled By AI-Generated MAGA Military Beauty

2026-03-23
News One
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images and personas being used to deceive social media users, particularly within a political context. The harm includes misleading people, spreading false narratives, and manipulating political opinions, which constitutes harm to communities and a violation of trust. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

MAGAs (and Maybe Hegseth?) Swoon Over 'Army Babe' Who Was AI Created

2026-03-24
PolitiZoom
Why's our monitor labelling this an incident or hazard?
An AI system (image generator) was used to create a completely fake persona that amassed a large following and led to users paying money for images, constituting financial harm and deception. This meets the criteria for an AI Incident because the AI system's use directly led to harm (financial exploitation and misinformation). The event is not merely a potential risk or a general AI-related news item but involves realized harm caused by the AI system's outputs.

Meet the AI "soldier" cashing in big by selling feet pics to lovestruck MAGA men

2026-03-25
Queerty
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating deepfake images and content that was used to deceive a large audience, leading to financial harm and misinformation. The AI system's use directly caused harm to individuals (financial exploitation) and communities (spread of misinformation and deception). The harm is realized, not just potential, and the AI system's role is pivotal in creating the fake persona and content. Hence, this is classified as an AI Incident.

AI 'Military Influencer' Jessica Foster Exposed with 1M Followers

2026-03-25
Chosun.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a virtual persona that was used to deceive millions of followers, spreading false impressions and potentially political disinformation. The AI system's use directly caused harm by misleading the public and enabling disinformation campaigns, which is a clear harm to communities. The account's role in generating revenue through adult content linked to the AI persona further indicates misuse. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Saucy soldier who cashed in by selling feet pics to MAGA men has AI secret

2026-03-25
Daily Star
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster was an AI-generated persona using deepfake technology to create fake images and social media content. Fans paid for AI-generated feet pictures, constituting financial harm and deception. The AI system's use in this fraudulent scheme directly caused harm to individuals (financial loss) and communities (misinformation and manipulation). Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

She Has 1 Million Followers and Photos with Trump -- But She's AI

2026-03-25
Playboy Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating a realistic persona that deceived many users, indicating AI system involvement. However, the harms described are primarily social and ethical concerns about deception and exploitation rather than direct or indirect harms such as injury, rights violations, or disruption. There is no indication of realized or plausible future harm meeting the AI Incident or AI Hazard definitions. The event mainly informs about the phenomenon and its societal implications, fitting the definition of Complementary Information, which includes updates and context about AI's impact without reporting a new incident or hazard.

Sweet US Female Soldier Goes Viral! Trump Fans Uncover the 'Terrifying Truth'

2026-03-22
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake persona that deceived many people, including political supporters, leading to misinformation and potential political manipulation. This deception harms communities by spreading false information and undermining trust. The AI-generated content also violates platform policies and misleads users, which is a breach of obligations under applicable law and platform rules. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.

Who Is the Beautiful Officer 'Jessica' in Black Stockings? Her Photo With Trump Went Viral Across the US, and the Truth Is Awkward

2026-03-23
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic fake images and persona, which were widely disseminated and believed by many, constituting misinformation and deception. This fits the definition of an AI Hazard because the AI-generated content could plausibly lead to harm such as misinformation, manipulation, or exploitation of followers. There is no clear evidence that actual harm (such as injury, rights violations, or disruption) has occurred yet, so it does not qualify as an AI Incident. The event is more than general AI-related news because it involves a specific AI-generated persona causing public confusion and potential harm. Therefore, the classification is AI Hazard.

MAGA Female-Soldier Hottie Is Actually an AI Fake, but Gullible Men Pay Up Anyway

2026-03-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated images and videos to create a fake social media influencer persona that deceives users into believing she is a real person. This deception has led to financial harm (users paying for content) and manipulation of political opinions, which constitutes harm to communities. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The article documents actual harm occurring, not just potential harm, so it is not an AI Hazard or Complementary Information.

Viral US Female Soldier Photographed With Trump Revealed as an AI Fake

2026-03-23
am730
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake persona and images that misled a large audience, including political supporters, which constitutes harm to communities through deception and manipulation. The AI-generated content was used to direct followers to paid adult content, indicating misuse of AI for financial exploitation and possible political influence. Since the AI system's use directly led to these harms, this qualifies as an AI Incident under the framework definitions.

Beautiful US Female Soldier Goes Viral! Rumored Photo With Trump; Experts Reveal the Truth: An AI Fake

2026-03-23
TVBS
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake social media persona that has gained significant attention and followers. While the article does not report direct harm occurring yet, experts warn about the plausible future misuse of such AI-generated accounts for spreading political propaganda and misinformation, which could harm communities and democratic processes. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm, even though no incident has yet materialized.

AI-Synthesized 'US Military Goddess' Gains a Million Followers in Four Months; Foreign Media Expose the Adult-Content Monetization Scheme

2026-03-24
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating synthetic media to create a fake persona, which has been used to deceive a large audience and monetize through adult content platforms. The harm includes misinformation, manipulation of public opinion, and potential use as an information warfare tool, which are harms to communities and societal trust. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The article also notes official actions taken to remove the accounts, confirming the harm has materialized.

With Trump and Putin: The Shocking Truth About the US Soldier Who Captivated Millions

2026-03-22
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate realistic fake images and videos (deepfake technology) to create a false persona that has gained significant social media influence. This has directly led to harm by misleading a large audience, spreading political propaganda, and potentially influencing public opinion under false pretenses. The harm is realized and ongoing, fitting the definition of an AI Incident due to violations of rights and harm to communities through misinformation and deception. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

US Soldier Sparks Controversy With Photos Alongside Trump, Putin and Zelensky

2026-03-22
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating realistic fake images and videos to create a false persona that misleads a large audience. This use of AI has directly led to harm in the form of misinformation and manipulation of public perception, which constitutes harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant harm through deception and misinformation.

What Is the Story of the Alleged Officer Who Appeared With Trump and Putin and Captivated Millions?

2026-03-22
الشروق أونلاين
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the persona is generated entirely by AI, including images and videos placing the character in false contexts with real-world figures. The use of this AI-generated persona has directly led to harm by misleading millions of followers, constituting misinformation and manipulation of public opinion, which is a harm to communities. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm through deception and potential political influence.

She Accompanied World Leaders and Deceived Millions: The Shocking Story of a US Soldier

2026-03-23
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake social media persona with realistic images and content, which directly led to widespread misinformation and deception of over a million followers. This constitutes harm to communities by spreading false narratives and manipulating public opinion. Therefore, this event qualifies as an AI Incident because the AI system's use directly caused significant harm through misinformation and social manipulation.

She Accompanied World Leaders and Deceived Millions: The Shocking Story of a US Soldier

2026-03-23
شبكة لالش الاعلامية
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic images and create a fake identity that influenced a large audience, which constitutes harm to communities through misinformation and deception. The AI-generated persona was deliberately used to mislead and manipulate, fulfilling the criteria for an AI Incident due to the realized harm of deception and potential political manipulation.

A Fictitious AI-Generated Military Figure Deceives Millions and Appears With Trump and Putin

2026-03-22
Arabstoday
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that generates realistic fake images and videos (deepfakes) to create a fictional persona that has attracted a large following and spread misleading political content. This has directly led to harm by deceiving the public and potentially manipulating opinions, which qualifies as harm to communities and a violation of rights. Therefore, this is an AI Incident because the AI system's use has directly led to significant harm through misinformation and deception.

A Million Followers... Then She Vanishes: Who Is Jessica Foster?

2026-03-23
وكالة النبا
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the persona and its content are generated using AI (deepfake and digital generation technologies). The use of this AI-generated persona to spread misleading content and influence public opinion constitutes harm to communities by spreading misinformation and potentially manipulating political and social discourse. Since the harm is occurring (followers are being misled by fabricated content), this qualifies as an AI Incident rather than a hazard or complementary information.

Jessica Foster, the Beautiful Soldier With a Million Followers Who Drove the MAGA Crowd Wild, Does Not Actually Exist

2026-03-23
EL MUNDO
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake social media persona with realistic images and content, which directly led to misinformation and deception of a large audience. This constitutes harm to communities through the spread of false information and manipulation of political sentiment, fulfilling the criteria for an AI Incident. The removal of the profile is a response to the incident but does not negate the harm caused while it was active.

Millions Are in Love With This Soldier, Who Does Not Actually Exist

2026-03-23
Clarin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake personas and images that are used to influence social media audiences and political narratives. While this use of AI is deceptive and manipulative, the article does not document a specific realized harm such as injury, legal violation with consequences, or direct disruption. The potential for harm to communities through misinformation and manipulation is credible and recognized, but the article focuses on the ongoing use and spread of these AI-generated profiles rather than a concrete incident of harm. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated content could plausibly lead to significant harm in the future, but no specific AI Incident is described.

Jessica Foster, the Pro-Trump Influencer Who Never Existed: An AI Deceives Thousands of Followers

2026-03-21
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Jessica Foster is an AI-generated avatar controlled by an anonymous creator, used to spread political propaganda and misinformation to a large audience. This use of AI directly caused harm by misleading people and influencing political opinions under false pretenses, which fits the definition of an AI Incident involving harm to communities. The harm is realized, not just potential, as the avatar accumulated over a million followers and actively engaged in spreading propaganda before being removed.

Meet Jessica Foster: The Viral AI Fooling Trump Fans

2026-03-20
Euronews Español
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (an AI-generated avatar) actively used to mislead and manipulate a political audience, which has already caused harm by spreading false information and potentially influencing public opinion. The AI's role is pivotal in creating a convincing but fake persona that deceives followers and monetizes their engagement. The harm to communities through misinformation and manipulation, as well as violations of platform policies, meets the criteria for an AI Incident. The event is not merely a potential risk but an ongoing realized harm.

The Jessica Foster Viral Phenomenon: Trump Soldier, Influencer, and AI-Generated

2026-03-22
Computer Hoy
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a realistic fake persona that manipulated public opinion and directed users to potentially exploitative content. The harm is realized through deception, manipulation, and illegal impersonation, affecting communities and violating platform rules and possibly legal frameworks. The AI's role is pivotal in creating the false persona and enabling the manipulation. Hence, this is an AI Incident rather than a hazard or complementary information.

Selfies Next to Trump and Messi... The Identity of the Female-Soldier Influencer With '1 Million Followers'

2026-03-25
Chosun.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a virtual persona that impersonated a real person and posted fabricated images, which misled the public and spread false information. This constitutes a violation of rights and harm to communities through misinformation and deception. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The article describes realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI-generated persona, not on responses or updates.

Walking the Tarmac With Trump, Snapping a Photo at the White House... The Identity of the Beautiful Female Soldier Who Captivated a Million

2026-03-25
mk.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fake images and a fictitious persona, which directly led to misinformation and deception of a large online community, causing harm to the community's trust and potentially enabling further misuse such as propaganda or disinformation. This constitutes harm to communities and a violation of trust, fitting the definition of an AI Incident. The article reports realized harm through the spread of false information and manipulation of public perception via AI-generated content.
Thumbnail Image

The 'beautiful female soldier' who even drew a thumbs-up from Trump... the twist behind the woman who hit 1 million followers in no time

2026-03-25
First-Class 경제신문 파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating realistic fake images and videos that misled the public, causing misinformation and potential harm to communities by spreading false political and military narratives. The misuse of AI-generated content for impersonation and political messaging constitutes a violation of rights and harms communities through misinformation. Since the harm (misinformation, deception, potential political manipulation) is occurring and the AI system's role is pivotal, this qualifies as an AI Incident.
Thumbnail Image

Walking side by side with Trump, taking selfies with Messi... the identity of the 'female-soldier influencer' with 1 million followers

2026-03-24
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic fake images and videos that mislead a large online audience, causing harm through misinformation and potential political manipulation. This directly relates to harm to communities and the spread of false information, fulfilling the criteria for an AI Incident. The AI system's development and use have directly led to these harms, as the fake persona and content are AI-generated and used to deceive and manipulate users.
Thumbnail Image

"The blonde woman next to Trump": the true identity of the uniformed 'MAGA girl' who captivated 1 million men

2026-03-25
아시아경제
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: the influencer accounts are AI-generated personas. The harm arises from using these systems to deceive users, spread political propaganda, and pair it with adult content for profit, which misleads and harms communities and violates rights. This is direct harm caused by the AI system's use, and the article documents realized harm (deception, misinformation, political manipulation) rather than merely potential harm. It therefore qualifies as an AI Incident under the framework.
Thumbnail Image

The 'female-soldier influencer' next to Donald Trump... her identity has been revealed

2026-03-25
YTN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content and AI influencer accounts that have misled many users, spreading false political narratives and fake personas. The harm is realized as misinformation and potential manipulation of public opinion, which harms communities and political processes. The AI system's use in creating and spreading these false images and videos is central to the incident. The deletion of accounts for policy violations confirms the recognition of harm. Hence, this is an AI Incident involving direct harm caused by AI-generated disinformation.
Thumbnail Image

A thumbs-up from Trump, a snapshot with Messi... who is the female soldier who captivated 1 million people?

2026-03-25
서울신문
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic but fake images and personas, which were then deployed on social media to mislead and manipulate users, causing harm through deception, misinformation, and potential political disinformation. This constitutes a violation of rights related to truthful information and harms communities by spreading false narratives and enabling fraudulent monetization schemes. The harm is realized as the fake persona attracted a large following and was used for deceptive purposes. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its misuse.
Thumbnail Image

The female soldier with '1 million followers' next to Trump turns out to be...

2026-03-25
데일리안
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fictitious individual with realistic images and a social media presence, which was then used to mislead the public and generate profit through adult-content services. The event involves AI-generated content used to spread misinformation and potentially enable political manipulation, a violation of rights that harms communities. Since the harm (misinformation, deception, potential political misuse) is occurring and is directly linked to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

The true identity of the blonde female US soldier with 1 million followers next to Trump

2026-03-26
문화일보
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fictitious image of a military woman, which was then used in a deceptive social media campaign causing harm by misleading the public and potentially spreading misinformation. The harm includes violation of trust, misinformation, and potential political manipulation, which constitute harm to communities and breach of rights. The account's removal indicates the harm was realized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its misuse.