AI-Generated Influencer 'Emily Hart' Used to Scam MAGA Supporters


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A 22-year-old Indian medical student used Google's Gemini AI to create a fake influencer persona, 'Emily Hart,' targeting American MAGA supporters with AI-generated images and content. The account amassed thousands of followers and generated significant income through subscriptions and merchandise before being banned for fraudulent activity, causing financial and social harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of generative AI systems to create a fake influencer persona that deceived users and generated income through fraudulent means. The AI system's outputs were central to the deception and monetization, directly causing harm to the users who were misled and financially exploited. The account was removed for fraudulent activity, confirming the harm occurred. This fits the definition of an AI Incident as the AI system's use directly led to harm (financial and trust-related) to groups of people. [AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


What is Emily Hart AI scam? How a fake MAGA influencer made thousands of dollars

2026-04-21
Hindustan Times

Indian Student Dupes 'Super Dumb' MAGA Men With AI Model, Makes Thousands Of Dollars

2026-04-22
News18
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (generative AI tools and custom ChatGPT) used to create a fake influencer persona. The use of AI directly led to harm by deceiving and manipulating a political community, which is a harm to communities and a breach of trust and transparency. The monetization through paid content based on this deception further compounds the harm. The account's removal for fraudulent activity confirms the harm was realized. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in generating misleading content and influencing political discourse.

Top MAGA influencer revealed to be AI -- created by a guy in India who made a mint off lonely men online

2026-04-21
New York Post
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to create and maintain a fake influencer persona that deceived millions of followers, leading to financial exploitation and misinformation. The AI's role was pivotal in generating realistic images and content that misled users. The harm includes deception of social media users, potential manipulation of political opinions, and violation of platform rules, which aligns with harm to communities and breach of obligations under applicable law (fraudulent activity). Since the harm has occurred and the AI system's involvement is direct and central, this is classified as an AI Incident.

Top MAGA influencer revealed to be AI created by man in India

2026-04-22
News.com.au
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate a fake influencer persona that deceived millions of followers and generated income through AI-generated content. The AI system's use directly led to harm by spreading misinformation and enabling fraudulent activity on social media, which harms communities and violates platform rules. The removal of the account for fraudulent activity confirms the harm was realized. The AI system's role was pivotal in creating and sustaining the fake persona, making this an AI Incident rather than a hazard or complementary information.

How 22-year-old Indian medical student fooled millions of MAGA followers with fake AI influencer, funded tuition

2026-04-22
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create a fake influencer persona that deceived millions, leading to financial harm and manipulation of political beliefs. The AI system's outputs were central to the incident, and the harm is realized, not just potential. The fraudulent activity led to platform enforcement actions, confirming the incident's severity. This fits the definition of an AI Incident as the AI system's use directly led to harm to communities and violations of trust and rights.

'Emily Hart' unmasked: Indian man used fake AI persona to con 'super dumb' MAGA fans, fund medical school

2026-04-22
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) to create a fake influencer persona and generate content that was used to scam people out of money. This constitutes direct harm to individuals (financial harm) and harm to communities (misinformation and deception within a political group). The AI system's use was central to the incident, as it enabled the creation of realistic fake images and persona content that deceived users. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

'MAGA is super dumb': Indian man scams Republicans with AI-generated model, makes thousands of dollars

2026-04-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated content to create a fake influencer persona that spread politically biased and misleading messages, which manipulated social media users and generated financial profit through deceptive means. The AI system's use directly contributed to the creation and dissemination of this misleading content and the fraudulent monetization scheme. This constitutes harm to communities through misinformation and fraudulent activity, fitting the definition of an AI Incident.

AI-generated MAGA influencer: Indian student behind 'hot girl' profile with millions of followers

2026-04-22
The Times of India
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create a virtual influencer persona that spread politically charged and potentially misleading content to a large audience, influencing social and political discourse. The AI-generated persona was used to manipulate social media users, which can be considered harm to communities and a violation of rights related to truthful information and political participation. The account's removal for fraudulent activity confirms the harm was realized. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in causing harm through misinformation and social manipulation.

Scammer Dupes 'Dumb' MAGA Men With AI Model

2026-04-21
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate content that intentionally deceived and manipulated a group of people, leading to realized harm through misinformation and social manipulation. The AI system's outputs were central to the incident, and the harm is direct and materialized, as evidenced by the account's banning for fraud. This fits the definition of an AI Incident due to violations of rights related to misinformation and harm to communities.

There's something about Emily: MAGA babe revealed to be scam artist

2026-04-21
Metro
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated personas ('Emily Hart' and 'Jessica Foster') used to mislead and manipulate a large audience with false identities and divisive political messaging. This deception caused harm to communities by spreading misinformation and exploiting followers financially. The AI system's development and use directly led to these harms, qualifying this as an AI Incident under the framework's criteria for harm to communities and violations of trust through AI-generated content.

This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

2026-04-21
Wired
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate synthetic content and personas that are deliberately designed to deceive and financially exploit people, which is a clear violation of ethical norms and can be considered a breach of rights (including potential fraud and manipulation). The AI system's use directly led to harm by enabling the scammer to grift money from unsuspecting individuals. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in the scam.

'MAGA Followers Are Super Dumb': Indian Medical Student Builds Fake AI Influencer Emily Hart To Lure Older US Men, Earns Thousands To Fund Education

2026-04-22
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-generated images and content to create a fictitious influencer persona that deceived thousands of followers, leading to financial harm (monetary loss to subscribers) and emotional harm (exploitation of loneliness and political paranoia). The AI system's outputs were central to the deception and monetization scheme. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (financial and emotional harm) and harm to communities (manipulation and misinformation).

How An Indian Student Fooled Thousands Of Pro-Trump MAGA Followers Using AI Influencer 'Emily Hart', Earned Huge Money

2026-04-22
NewsX
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create and operate a fictitious influencer persona that deceived thousands of followers, leading to financial gain and manipulation of political audiences. The harm includes violation of trust, misinformation, and potential social harm to the targeted community. The fraudulent nature of the account and its removal by Instagram further confirm the realized harm. Hence, this event meets the criteria for an AI Incident as the AI system's use directly led to harm to communities through deception and misinformation.

Scammer used fake woman to grift 'super dumb' MAGA men

2026-04-22
Alternet.org
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used in the creation of fake images and persona for scamming purposes. The harm is realized as financial loss to the targeted individuals, which constitutes harm to people. The AI system's use is central to the scam's success, making this an AI Incident due to direct harm caused by the AI-generated deceptive content.

This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

2026-04-21
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to create realistic but fake content representing a fictional person. The AI system's outputs are used to deceive and financially exploit individuals, which is a direct harm to those individuals (harm to communities and individuals through deception and financial loss). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through fraudulent activity and exploitation.

This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

2026-04-21
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) used to create AI-generated images and personas for deceptive purposes, leading to financial harm to victims. The scam exploits AI-generated content to manipulate and defraud individuals, which fits the definition of an AI Incident as the AI system's use directly led to harm to people. The harm is realized (financial loss), and the AI system's role is pivotal in enabling the scam.

Indian Man Dupes Republicans With AI-Generated Influencer 'Emily Hart'; Makes Thousands of Dollars | LatestLY

2026-04-22
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a digital persona that actively spreads misleading and polarizing political content, which has amassed millions of views and significant follower engagement. This has directly led to harm by manipulating political opinions and monetizing misinformation, impacting communities and political discourse. The AI system's role is pivotal in generating and sustaining the persona and content, and the resulting harm is realized, not merely potential. Therefore, this qualifies as an AI Incident under the framework definitions.

Bikini, beer, big opinions: Viral 'MAGA nurse' turns out to be an Indian med student running an AI money machine

2026-04-22
The Statesman
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a fully fabricated influencer persona that deceived a large audience, leading to financial transactions and engagement under false pretenses. The AI's role was pivotal in generating the content and images that sustained the illusion. The harm includes deception, violation of platform rules, and financial harm to users who paid for exclusive content believing it came from a real person. This fits the definition of an AI Incident, as the AI system's use directly led to harm to communities (loss of trust, deception) and financial harm to individuals; the removal of the account for fraudulent activity confirms the harm materialized.

'Scammer' AI MAGA Girl Creator Says 'I Was Making Good Money' as Gemini-Powered Influencer Account Is Flagged for Fraudulent Activity

2026-04-22
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems in the creation and management of the influencer account, including AI-generated visuals and AI-guided content strategy. The account's fraudulent activity and lack of disclosure constitute a violation of platform policies and potentially mislead the public, which can be considered harm to communities and a breach of obligations under applicable law regarding transparency and truthful representation. The removal of the account confirms that harm occurred and was recognized by the platform. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through deceptive and fraudulent use on social media.

One Indian medical student made bank by appearing as AI-generated MAGA girl, and 'super dumb' people in the US lapped it up | Attack of the Fanboy

2026-04-22
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated images and content to create a fake persona that influenced a large audience with politically charged and misleading posts. The AI system's outputs were central to the incident, as they enabled the creation and dissemination of deceptive content that manipulated social media algorithms and user perceptions. This led to harm in the form of misinformation and social polarization, which are harms to communities. The banning of the account for fraudulent activity further confirms the negative impact. Hence, the event meets the criteria for an AI Incident.

AI-Generated MAGA Influencer 'Emily Hart' Unmasked: Indian Developer Behind Viral Persona - Internewscast Journal

2026-04-22
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate images and content for a fake influencer persona that amassed a significant following and generated income by exploiting political and social biases. The AI-generated persona misled followers, which constitutes harm to communities through deception and financial exploitation. The removal of the account for fraudulent activity confirms the harm was realized. The AI system's use was central to creating and sustaining this deceptive persona, directly leading to harm. Hence, this event meets the criteria for an AI Incident.

Indian student creates AI influencer 'Emily Hart', earns thousands targeting MAGA audience

2026-04-22
News9live
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a fictional influencer persona that spread polarizing and misleading political content, which manipulated a community and generated financial gain through deceptive means. The account was banned due to fraudulent activity, indicating harm occurred. The AI system's development and use directly led to harm to communities by spreading misinformation and manipulation, fulfilling the criteria for an AI Incident under the OECD framework.

This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

2026-04-21
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake influencer personas and content that directly led to deceptive financial gain and manipulation of social and political sentiments. The AI-generated content was used to scam people, which is a clear harm to individuals and communities. The use of AI in this context is central to the harm, as the realistic AI-generated images and content enabled the scam to be effective. The incident also highlights issues of misinformation and exploitation of political polarization, which harm communities. Hence, it meets the criteria for an AI Incident due to realized harm caused by the AI system's use.

Top MAGA Influencer Emily Hart Is Not Real? How An Indian Guy Scammed Millions With AI-Generated Lewd Content

2026-04-22
Mashable India
Why's our monitor labelling this an incident or hazard?
The article describes a clear case where an AI system was used to create a fake persona and generate content that deceived and scammed people, causing financial harm. The AI system's involvement is explicit and central to the incident. The harm is realized and significant, involving fraud and exploitation of users, which fits the definition of an AI Incident due to harm to communities and individuals through deception and financial loss.

Who is Emily Hart? The beautiful MAGA influencer created by AI that has Trump fans falling head over heels

2026-04-22
MARCA
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fabricated social media influencer persona that engaged and manipulated users, leading to deceptive practices and fraudulent activity. This directly caused harm by misleading users and potentially distorting public discourse, which constitutes harm to communities. The account's removal for fraud confirms that harm materialized. The AI's role was pivotal in generating the persona and content that led to these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Profile Banned: Trump Influencer Duped Millions of Followers

2026-04-22
Bild
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fake images and content that directly led to harm by deceiving millions of followers and spreading propaganda. The AI system's use in fabricating a false identity and misleading a large audience constitutes a violation of rights and harm to communities. The harm is realized, not just potential, as the fake profile influenced public discourse and financial transactions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Indian Student Created MAGA "Influencer" With AI, Made Thousands Of Dollars

2026-04-22
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system was explicitly used to generate the influencer's images and content, which directly led to harm by deceiving users and generating fraudulent income. The event caused realized harm through manipulation and fraud, meeting the criteria for an AI Incident. The removal of the accounts after exposure further confirms the harm and misuse of AI. Although the harm is primarily financial and reputational, it also involves violation of platform rules and potential societal harm through misinformation and manipulation of political opinions.

AI MAGA influencer 'Emily Hart' earns thousands of dollars for Indian student: 'In India...can't make this amount of money'

2026-04-22
The Indian Express
Why's our monitor labelling this an incident or hazard?
The AI system was involved in generating a fake influencer persona and content, which was used to earn money. However, the article does not report any harm to persons, communities, property, or rights resulting from this use. The removal of the account for fraudulent activity is a platform enforcement action rather than a direct harm caused by the AI system. There is no credible indication that this event could plausibly lead to significant harm in the future beyond typical concerns about misinformation or fraud, which are not explicitly detailed here. Thus, the event is best classified as Complementary Information, providing insight into AI-generated personas and platform responses without constituting a new AI Incident or Hazard.

Indian medical student uses Gemini to create AI MAGA influencer, loots Americans of lakhs- Moneycontrol.com

2026-04-22
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google's Gemini) used to generate a fake influencer persona and content. The use of AI directly led to financial harm to followers who were deceived into paying for content from a non-existent person. This constitutes an AI Incident because the AI system's use directly caused harm to people (financial loss) and contributed to political manipulation, which can be considered harm to communities. The event is not merely a potential risk but a realized harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Indian medical student uses AI woman, earns thousands of dollars from 'dumb Americans'

2026-04-22
India Today
Why's our monitor labelling this an incident or hazard?
The event describes a medical student using AI-generated images and content to create a fake influencer persona that deceived and monetized an audience, including selling explicit AI-generated images. The AI system was central to creating the deceptive persona and content, directly leading to harm through deception and exploitation. This fits the definition of an AI Incident as the AI system's use directly led to harm to communities through misinformation and potential exploitation.

How Indian Medic Profited From Fake AI Influencer Emily Hart

2026-04-22
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a fake influencer persona that amassed a large following and generated income through deceptive means. The AI's role was pivotal in creating and maintaining the illusion, leading to harm in the form of deception and manipulation of online communities. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation and deception. Although the harm is non-physical, it is significant and clearly articulated. Therefore, the classification is AI Incident.

Student who created AI influencer says 'dumb' MAGA crowd was easy to fool

2026-04-22
The Independent
Why's our monitor labelling this an incident or hazard?
The event describes the creation and use of an AI system to generate a fake social media influencer who posted misleading political content, which was accepted and believed by a targeted audience. This caused harm to the community by spreading misinformation and manipulating political views. The AI system's involvement in generating and disseminating this content directly led to these harms. The takedown of the accounts for fraudulent activity further supports the presence of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI influencer mints money with pro-Trump posts: Check how a broke student created Emily Hart to fool 'dumb' MAGA crowd | Today News

2026-04-22
mint
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake social media influencer persona and content that deceived and manipulated a large audience, leading to financial gain and spreading misleading political content. The harm includes misinformation, manipulation of political discourse, and exploitation of users, which are harms to communities and potentially violations of rights related to truthful information. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely a product launch or general AI news but describes a concrete case of harm caused by AI use.

AI Influencer Emily Hart: Indian Student Earned Thousands of Dollars With a Fake Profile

2026-04-22
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI tools (e.g., Google's Gemini, Grok by X) to create and maintain a fake influencer persona that deceived and financially exploited people. The harm is realized as the followers were misled and paid for content under false pretenses, which is a form of harm to communities and a violation of trust and potentially consumer rights. The AI system's use directly led to this harm through the creation and dissemination of fake content. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Indian Man Used AI-Generated MAGA Influencer to Scam Men on Social Media

2026-04-22
Breitbart
Why's our monitor labelling this an incident or hazard?
The event describes a man using AI-generated images and AI chatbot advice to create a fake influencer persona that scammed money from social media users. The AI system's outputs were directly used to deceive and financially harm people, fulfilling the criteria for an AI Incident. The harm is realized (scamming money), and the AI system's role is pivotal in enabling the scam. The event is not merely a potential risk or a complementary update but a concrete case of AI misuse causing harm.

'Emily Hart' wasn't real: Broke South Asian student cashed in on 'dumb' MAGA supporters to fund his US entry

2026-04-22
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create a fake persona that generated millions of views and led to merchandise sales and paid subscriptions by deceiving a political audience. The harm is realized as the audience was misled and financially exploited, which constitutes harm to communities and a breach of trust. The fraudulent nature of the account and its removal further confirm the harm. Hence, this is an AI Incident as the AI system's use directly led to harm through misinformation and financial exploitation.

Indian student behind fake 'Emily Hart' that scammed MAGA wants to study in US

2026-04-22
The Times of India
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (Gemini) used to generate content and persona that was deliberately used to scam people, causing financial harm. The AI system's outputs were central to the scam, and the harm (financial loss and deception) has already occurred. This fits the definition of an AI Incident because the AI system's use directly led to harm to people (financial and reputational).

This MAGA influencer turned out to be AI; Hers is not the only fake account - CNBC TV18

2026-04-22
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create and operate fake social media influencer accounts, which fits the definition of AI systems. The use of these AI-generated personas to spread politically charged content and monetize their popularity suggests potential indirect harm to communities through misinformation or manipulation. However, the article does not report any direct or realized harm, legal violations, or disruptions caused by these AI influencers. Instead, it highlights the exposure of one such AI influencer and the broader trend of AI-generated personas gaining popularity. This focus on reporting and contextualizing the phenomenon, rather than documenting a specific harmful event or imminent risk, aligns with the definition of Complementary Information. Hence, the event is not an AI Incident or AI Hazard but provides valuable insight into the evolving AI ecosystem and its societal impacts.

MAGA Influencer Emily Hart Revealed to Be AI Created by 22-Year-Old Student

2026-04-22
Us Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate realistic images and persona content that deceived social media users, leading to harm through misinformation and manipulation of a community (MAGA followers). The AI system's outputs were central to the incident, as the fake persona was AI-created and used to exploit users financially and socially. The fraudulent activity and subsequent ban by Meta further confirm the harm caused. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of trust, fulfilling criteria (c) and (d) under the AI Incident definition.

Indian Scammer Might Be Behind Your Favorite Hot MAGA Influencer

2026-04-22
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The event describes a scam involving AI-generated personas and content that directly caused financial harm to people targeted by the scam. The AI system was used in the development and use of the scam accounts, which led to realized harm (financial loss) to individuals. This fits the definition of an AI Incident because the AI system's use directly led to harm to groups of people (financial harm to victims).

MAGA Influencer Is Actually a Student From India | Heute.at

2026-04-22
Heute.at
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create a fictitious influencer persona that deceived a large audience, leading to financial gain for the creator and potential financial and informational harm to the deceived users. The AI-generated content was used to manipulate and exploit users, which fits the definition of an AI Incident due to realized harm (fraud, deception, and financial exploitation).

Indian Man Made Enough Money for Med School After Tricking 'Dumb' Republicans Into Thinking He Was a Blonde MAGA Influencer

2026-04-22
Mediaite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images and AI assistance in content strategy, which were used to create a fake influencer persona that deceived a large audience. The deception led to financial gain by exploiting followers, which constitutes harm to communities through misinformation and manipulation. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. Although the harm is non-physical, it affects social trust and political discourse, which are recognized forms of harm under the framework.

Indian student behind AI MAGA influencer: The blonde Trump supporter who never existed - The Tribune

2026-04-22
The Tribune
Why's our monitor labelling this an incident or hazard?
The event describes the creation and use of an AI-generated persona that actively misled social media users by presenting a false political influencer identity. This deception caused harm to communities by spreading misinformation and manipulating political opinions, fulfilling the criteria for harm to communities under AI Incident definition (d). The AI systems were central to generating the persona and content, and their use directly led to the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Indian Student Fools Thousands of Men with an AI Model

2026-04-23
Nau
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake influencer persona that successfully deceived thousands of people, causing harm through misinformation and manipulation. The harm is realized, as people were misled and money was made exploiting this deception. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (deception and misinformation) and potentially breaches ethical and legal norms. The event is not merely a potential hazard or complementary information, but a concrete incident of harm caused by AI use.

Indian medical student cracks MAGA 'cheat code' to mint thousands of dollars

2026-04-22
The Siasat Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Google Gemini's Nano Banana Pro and Elon Musk's AI chatbot Grok) used to generate a fake influencer persona that amassed a large following and monetized content based on political manipulation and deception. The harm includes violation of platform rules, deception of users, and potential social harm through spreading divisive political content. The fraudulent activity led to account removals, indicating recognized harm. Thus, the AI system's use directly led to harm, meeting the criteria for an AI Incident.

Indian medical student creates a young, attractive "MAGA influencer" model using AI, makes huge money by selling her photos to MAGA supporters

2026-04-22
OpIndia
Why's our monitor labelling this an incident or hazard?
The event describes the creation and use of AI-generated images and persona to deceive and manipulate a political audience, leading to financial gain and spreading misleading content. The AI system's use directly caused harm by enabling fraudulent activity and misinformation, impacting the community targeted. The account's ban for fraudulent activity confirms the harm was realized. Hence, this is an AI Incident due to realized harm caused by AI-generated deceptive content.

AI doing the jobs humans can't or won't do.

2026-04-22
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system (Gemini chatbot) was used to create a fake persona and generate content designed to exploit and scam a vulnerable group of people. The scam has already occurred, causing direct harm to individuals financially and through deception. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to people.

Hot influencer unmasked as Indian med student, and he's earned thousands tricking 'dumb' men

2026-04-22
The Tab
Why's our monitor labelling this an incident or hazard?
An AI system was used to create realistic images of a fake influencer, which directly led to harm by tricking people into spending money on a non-existent person. This constitutes harm to individuals through deception and financial exploitation, fitting the definition of an AI Incident. The AI system's use in generating the fake persona was pivotal to the incident, as it enabled the creation of a convincing but false identity that caused harm.

Top MAGA Influencer's True Identity Exposed After Racking Up Millions of Followers with Patriotic Bikini Photos and Gun-Toting Content

2026-04-22
RadarOnline
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake influencer images and deceive an audience, which is a misuse of AI technology. Although the article does not report physical injury, legal violations, or operational disruption, the deception and manipulation of a community constitute realized harm rather than a merely plausible future scenario. Because the AI system's role in creating the persona and content was central to that harm, the event qualifies as an AI Incident rather than an AI Hazard or Complementary Information.

Indian Medical Student Uses AI Influencer To Earn Thousands From US Audience, Account Later Banned

2026-04-22
The Hans India
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the influencer persona and content. The use of AI-generated synthetic media to simulate an identity and monetize it raises issues of misinformation, digital ethics, and potential violations of platform policies. The account's monetization and subsequent banning indicate that harm (economic and ethical) occurred due to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm in terms of ethical breaches, misinformation risks, and platform disruption.

Female MAGA Influencer Revealed To Be Male Med Student Using AI: 'I Haven't Seen An Easier Way To Make Money Online'

2026-04-22
BroBible
Why's our monitor labelling this an incident or hazard?
The event describes a male medical student using AI systems to generate a fake female MAGA influencer persona, which attracted millions of views and followers, and generated income by deceiving people. The AI system's outputs were central to the deception and financial exploitation, constituting harm to communities through misinformation and manipulation. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Fake MAGA Influencer: Indian Student Scams Thousands of Dollars with AI Beauty Emily Hart

2026-04-22
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Google Gemini, Nano Banana Pro) to generate realistic fake images and content for a fraudulent scheme that caused financial harm to victims. The AI system's use was central to the deception and monetization strategy, directly leading to harm. The account was eventually suspended due to fraud, confirming the realized harm. Hence, this is an AI Incident due to direct harm caused by AI-enabled deception and fraud.

Emily Hart: How a Fake AI Influencer Scammed US Conservatives for Tuition Money | LatestLY

2026-04-22
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a fake influencer persona that misled and monetized conservative American users, causing financial harm. This constitutes a violation of trust and potentially consumer protection laws, fitting the definition of an AI Incident due to direct harm to people (financial harm) and harm to communities (manipulation of political discourse). The AI system's use was central to the harm, as it generated the persona and content that deceived users. Therefore, this is classified as an AI Incident.

Indian Student Behind Top MAGA Influencer Emily Hart Admits She's AI And Has Made Thousands From Online Followers

2026-04-22
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and operate fake social media personas that actively mislead and manipulate a political audience, causing harm to communities through misinformation and deception. The AI system's use directly led to the harm by enabling the creation of a convincing but false persona that exploited users for financial gain and spread polarizing content. The fraudulent nature and the eventual banning of the account confirm the harm occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

MAGA 'hot girl' attracting fans with bikinis and guns turns out to be something totally unexpected

2026-04-22
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Google Gemini) to generate content that directly led to harm by deceiving and manipulating a community of users, resulting in financial exploitation and misinformation. The AI system's use in creating a fake persona that attracted and scammed followers fits the definition of an AI Incident, as it caused harm to communities and violated ethical standards. Although the harm is non-physical, it is significant and clearly articulated, involving deception and exploitation facilitated by AI. Hence, the classification as AI Incident is appropriate.

Medical student from India behind AI conservative influencer says 'super dumb' MAGA crowd was easy to fool

2026-04-23
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system (Google's Gemini AI) was used to generate the influencer's images and content, which were then deployed to mislead and manipulate a political audience. The AI-generated persona spread false or misleading political messages, which is a form of harm to communities. The fraudulent activity led to the removal of the accounts, confirming the harm occurred. Therefore, this qualifies as an AI Incident due to the direct use of AI in generating deceptive content that caused harm to a community through misinformation and manipulation.

Conservative Influencer With Millions of Followers Exposed as AI Bot Created by 22-Year-Old From India

2026-04-22
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and operate a fake social media influencer account that deceived millions of followers with fabricated content and persona. This deception can be considered harm to communities by spreading misinformation and manipulating public discourse. The fraudulent nature of the account and its eventual banning confirm that harm occurred. The AI system's role was pivotal in generating the content and persona, directly leading to the harm. Hence, this is classified as an AI Incident.

Top MAGA influencer's secret identity revealed - The Horn News

2026-04-22
The Horn News
Why's our monitor labelling this an incident or hazard?
The event describes the deliberate use of AI-generated content and personas to scam and deceive a large group of people, causing financial and informational harm. The AI systems were instrumental in creating and maintaining the fake influencer persona and generating content that misled followers. The harm is realized and direct, as people were scammed and manipulated. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violation of rights through fraudulent activity.

Emily Hart had millions of MAGA fans drooling over her content. Turns out, she's an AI character made by an Indian scammer.

2026-04-22
We Got This Covered
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to create a fake influencer persona that attracted a large following and generated income through deceptive means. The AI-generated content was used to manipulate a political audience, leading to financial harm (scams and exploitation) and misinformation dissemination. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (manipulation, misinformation) and financial harm to individuals. The fraudulent activity and bans by platforms further confirm the harm caused.

Scammer Says 'Super Dumb People' Drove AI-Generated MAGA Influencer to 1 Million Followers, Thousands in Profit

2026-04-22
International Business Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake influencer personas and content that attracted millions of views and followers, leading to monetization and profit. The AI-generated content deceived a large audience, causing harm to communities through misinformation and manipulation. The involvement of AI in generating and managing the content is explicit, and the harm is realized, not just potential. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of platform policies, which are breaches of obligations intended to protect users. The event is not merely a product launch or general AI news, nor is it a future risk without realized harm, so it is not an AI Hazard or Complementary Information.

Who Is Emily Hart? Top MAGA Influencer Is AI Created by Orthopedic Surgery Trainee in India Who Funded His Medical School Expenses by Tricking Men

2026-04-22
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate a fake influencer persona that deceived and manipulated a large audience, leading to financial harm and misinformation. The AI's role was pivotal in creating believable content and images that attracted and exploited followers. The fraudulent activity led to account removals, confirming realized harm. This fits the definition of an AI Incident as the AI system's use directly led to harm to communities (misinformation, fraud) and violation of trust, fulfilling criteria (c) and (d) in the harm definitions.

Who Is Emily Hart? AI-Generated MAGA Influencer Scam by 22-Year-Old Indian Student Exposed

2026-04-22
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a fake influencer persona that was used to scam people out of money, which is a clear violation of rights and causes harm to individuals financially and socially. The AI system's development and use directly led to this harm, fulfilling the criteria for an AI Incident. The fraudulent activity and resulting takedown of accounts confirm that harm occurred. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident.

AI Model & 'MAGA' Influencer Emily Hart Unmasked as Indian Man

2026-04-22
Mandatory
Why's our monitor labelling this an incident or hazard?
The AI system (Google Gemini) was explicitly used to create and manage a fake influencer persona that amassed millions of views and generated income through subscriptions and merchandise sales. The content was designed to appeal to a specific ideological audience, potentially influencing opinions based on false representation. This constitutes harm to communities by spreading misinformation and deceptive content, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the influencer persona was active and influential.

AI 'MAGA Girls' Grift Thousands from Loyal Fans: Inside the Scam Preying on Conservative Men

2026-04-22
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating images and personas used to deceive and scam people, causing direct financial harm to victims. The AI's role is pivotal in creating believable fake influencers that attract and manipulate fans, leading to realized harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial fraud and exploitation) and violations of trust, which can be considered harm to communities and individuals. The article details ongoing harm rather than just potential or future risk, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI systems are central to the incident.

Top MAGA influencer exposed as AI creation from India

2026-04-22
Joe Banks
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake influencer persona that spread misleading political content and generated significant engagement and revenue. The harm includes deception of a community (harm to communities), fraudulent activity, and violation of platform rules. The AI system's use directly caused these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Emily Hart AI influencer 'it girl' catering to MAGA crowd makes scammer rich

2026-04-22
Scallywag and Vagabond
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and disseminate fabricated influencer content that directly misled and exploited a community, causing harm through deception and fraudulent monetization. The AI system's outputs were central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, as the AI-generated persona gained followers, influenced opinions, and generated income through deceptive means. The fraudulent nature and the impact on the community justify classification as an AI Incident rather than a hazard or complementary information.

How an AI-generated 'conservative MAGA influencer' went viral and made thousands

2026-04-22
The American Bazaar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Gemini) used to generate content and guide strategy for a fake influencer account. The account spread politically charged, misleading content under a false identity, which gained significant engagement and monetization. This activity constitutes harm to communities by spreading misinformation and manipulating public opinion, fulfilling the criteria for harm under AI Incident definition (harm to communities). The AI system's development and use directly led to this harm, and the account was eventually removed for fraudulent activity, confirming the realized harm. Thus, the event is best classified as an AI Incident.

MAGA's bikini queen exposed as an AI fraud milking lonely men

2026-04-22
Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create fake digital influencers that mislead and exploit a targeted audience, causing harm through deception and fraud. The AI's role is pivotal in generating the personas and content that led to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (fraud, deception, exploitation) to a group of people (the followers).

Aspiring Indian doctor scams MAGA fans with 'hot' model influencer

2026-04-22
DT Next
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake influencer persona that deceived and scammed people, causing financial harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial loss) to a group of people. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the scam. Therefore, this event qualifies as an AI Incident.

Emily Hart and the AI Grift That Turned Loneliness Into Cash

2026-04-22
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a synthetic persona that deceives a community for financial gain. The harm is realized as followers are misled by the AI-generated identity, which manipulates political and social trust, causing harm to communities and potentially violating ethical norms. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Indian Student Used AI To Create Fake MAGA Influencer 'Emily Hart', Earned Thousands Before Account Takedown

2026-04-22
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a fake influencer persona and content that actively misled and influenced a community, causing harm through misinformation and deception. The AI system's outputs directly led to the spread of false narratives and manipulation of audiences, which is a harm to communities. The takedown of the account confirms the recognition of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Emily Hart: The MAGA Influencer Who Earned Thousands of Dollars from Fans but Turned Out to Be Fake

2026-04-22
BioBioChile
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate the influencer's images and exclusive content, which was then used to deceive thousands of followers and generate income. The harm includes fraudulent activity (financial harm to subscribers) and the spread of misleading political content, which can harm communities by influencing opinions based on falsehoods. The account was eventually removed for fraudulent activity, confirming the harm occurred. Thus, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Who Is Emily Hart, the Supposed MAGA Influencer Created with AI to Scam Men Online - La Tercera

2026-04-23
LA TERCERA
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI platform to create a fake influencer persona that deceived people and generated income by exploiting their trust. The harm is realized, as men were scammed financially by subscribing or paying for content under false pretenses. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article describes the development and use of the AI system in a way that caused actual harm, rather than merely potential harm.

AI-Created MAGA Influencer Goes Viral, Earning Thousands of Dollars off Lonely Men Online

2026-04-22
Univision
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate images and content strategy for a fictitious influencer persona. The use of AI directly led to the creation and dissemination of misleading content that deceives users, which can be considered harm to communities and a violation of rights (e.g., right to truthful information). The harm is realized as the AI-generated persona is actively influencing and monetizing a vulnerable audience under false pretenses. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through deceptive and manipulative content.

This Scammer Used an AI-Generated MAGA Girl to Grift 'Super Dumb' Men

2026-04-23
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article describes how an AI system was used to generate a fake persona and content tailored to a political niche to scam people financially. The AI system's involvement in creating and optimizing the deceptive profile directly contributed to the harm (financial fraud and manipulation). This fits the definition of an AI Incident as the AI system's use has directly led to harm to people through deception and financial exploitation.

The Now-Banned Viral MAGA Influencer Is, Well, AI

2026-04-23
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system generating a fake influencer persona that was used to scam people, causing direct financial harm. The AI's role in creating the avatar and guiding the choice of target audience was pivotal to the scam's success. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial loss) to a group of people (MAGA supporters).

The Emily Hart Case: The AI-Created MAGA Content Influencer Who Scammed Thousands of People

2026-04-22
Sopitas.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create a fake influencer persona that was used to scam thousands of followers, causing direct financial harm. The AI system's use in generating deceptive content that led to monetization and fraud fits the definition of an AI Incident, as it directly led to harm to people (financial harm and deception). The presence of AI is explicit, and the harm is realized, not just potential. Hence, the classification is AI Incident.

MAGA Influencer Emily Hart Exposed as Indian Man

2026-04-23
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the creation and dissemination of misleading and politically charged content that manipulates social media algorithms to influence public opinion and generate income. This constitutes harm to communities through the spread of misinformation and political manipulation. The AI's role was pivotal in generating content and advising on targeting strategies that exploited social divisions. Since the harm is realized and the AI system's use directly contributed to it, this qualifies as an AI Incident.

Creator of AI Influencer Claims "Super Dumb" MAGA Followers Were "Easy to Fool" - Inquisitr News

2026-04-23
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create a fake online persona that actively misled and manipulated a political audience, causing harm to communities through misinformation and exploitation. The AI system's outputs were central to the incident, and the harm was realized (followers were deceived, money was made from them, and platforms had to intervene). This fits the definition of an AI Incident as the AI system's use directly led to harm to communities and a breach of trust, fulfilling criteria (d) and (c) under the AI Incident definition.

Emily Hart as the MAGA AI Grift Turns Into a Warning Sign

2026-04-23
El-Balad.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in the development and use of the fabricated persona, guiding content creation to target a political audience. The resulting harm includes deception of followers, erosion of trust, and manipulation of political identity, which are harms to communities. Since the harm is occurring through the use of the AI system, this qualifies as an AI Incident. The event is not merely a potential risk or a general discussion but describes realized harm caused by the AI-enabled persona.

Infuriating MAGA Influencer Scam That Wrecked Far Too Many

2026-04-23
Resist the Mainstream
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and operate fake social media influencers who spread misleading content and generate income through deceptive means. This directly led to harm by deceiving and manipulating social media users, violating trust and potentially impacting social and political discourse. The financial exploitation and ideological manipulation constitute harm to communities and individuals. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm through misinformation, fraud, and manipulation.

Indian student earns thousands by using AI to create pro-MAGA influencer

2026-04-25
Cybernews
Why's our monitor labelling this an incident or hazard?
The AI system (Gemini) was used to create a fake influencer persona that actively spread political content, gaining millions of views and financial benefit. This use of AI directly contributed to misleading online influence and potential misinformation, which is a harm to communities. The suspension of the accounts following the investigation indicates recognition of harm or violation. Although no physical injury or legal rights violation is explicitly mentioned, the misleading political influence and financial exploitation constitute significant harm. Thus, the event meets the criteria for an AI Incident due to the AI's direct role in causing harm through misinformation and manipulation.

16 Comments

2026-04-23
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a fake persona that directly misled and manipulated a large group of people, causing them to spend money and believe in a false identity. This deception harms the community by spreading misinformation and exploiting users financially. The AI system's development and use were pivotal in enabling this harm. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm to communities and financial exploitation.

Solidot | Indian Man Uses AI-Generated MAGA Girl to Scam Conservative American Men

2026-04-22
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake social media persona that deceived users, leading to financial harm (monetary loss from subscriptions and tips) and manipulation of political beliefs, which can be considered harm to communities and violation of trust. The fraudulent use of AI-generated content to exploit users fits the definition of an AI Incident because the AI system's use directly led to realized harm through deception and financial exploitation.

Indian Man Creates AI Female Influencer to Scam Lonely American Men, Attracting Over a Million Followers

2026-04-23
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI technologies (Google Gemini, Grok, and other AI tools) to generate a virtual persona and content that deceived a large audience. The harm includes emotional and political manipulation, which constitutes harm to communities and a violation of rights related to truthful information and political expression. Since the AI system's use directly caused these harms, this qualifies as an AI Incident.

Indian Medical Student Earns Thousands of Dollars a Month with AI-Generated MAGA Influencer

2026-04-23
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google Gemini) to create and disseminate AI-generated content that spreads misleading and politically polarizing messages. The AI-generated virtual influencer accounts have caused real harm by manipulating social media algorithms to amplify divisive content, misleading users, and contributing to political polarization and misinformation. These effects constitute harm to communities, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in generating the deceptive content and enabling the rapid spread of harmful narratives, directly linking the AI system's use to the realized harm.

Indian Medical Student Uses AI to Build a 'MAGA Goddess' Earning Thousands of Dollars a Month; San Francisco AI Store Manager Keeps Pulling Questionable Stunts

2026-04-23
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The Indian student's use of AI to create a deceptive persona that exploits and manipulates a specific audience for financial gain constitutes an AI Incident due to harm to communities through misinformation and exploitation. The AI store manager's mismanagement, which disrupted business operations, also qualifies as an AI Incident. Both involve AI system use leading directly or indirectly to harm, meeting the criteria for AI Incidents rather than hazards or complementary information.

"Richtig dumme Menschen": Student erfindet MAGA-Krankenschwester und zockt Trump-Fans ab

2026-04-23
T-online.de
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake persona that deceived and financially exploited people, causing harm through fraudulent activity. The harm is realized as the Trump supporters were tricked and paid money based on the AI-generated false identity. This fits the definition of an AI Incident because the AI system's use directly led to harm to people and communities through deception and financial exploitation.

AI Fake "Emily" Allegedly Drove MAGA Men Crazy

2026-04-23
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake influencer persona that spread politically charged and provocative content, exploiting social and political biases. The AI-generated deepfake images and persona were used to deceive and manipulate a target audience, which could plausibly lead to harm such as misinformation, social division, or political manipulation. Although the article discusses the exposure of the fake and the potential harms, it does not document actual realized harm or injury. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but does not confirm that harm has occurred yet.

AI Influencer Unmasked: Behind the MAGA Blonde Is a Medical Student from India - "Extremely Stupid"

2026-04-23
watson.de/
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a fake influencer persona that deceived and manipulated a large online audience, spreading politically charged and provocative content. This manipulation of public opinion and dissemination of false identity constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to this harm, and the account's removal for fraudulent activities confirms the realized harm. Hence, the classification as AI Incident is appropriate.

Medical Student from India Creates AI-Generated MAGA Influencer: How He Finances His Studies

2026-04-23
Hellweger Anzeiger
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate the fake influencer's images and to advise on content strategy, which led to the creation of a deceptive social media persona with millions of followers. The AI's involvement directly enabled the spread of misleading political content and the monetization of this deception, causing harm to the community by manipulating and exploiting followers. The harm is realized, not just potential, as money was earned through deceptive means and misinformation was actively spread. Hence, this is an AI Incident.

Indian Student Deceives Trump Supporters with an AI Nurse

2026-04-23
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini chatbot and Nano Banana Pro for image generation) used to create a fake persona that manipulated social media users, specifically Trump supporters, to generate income through subscriptions and merchandise sales. This manipulation constitutes harm to communities and financial harm, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to realized harm through deception and exploitation. The account's eventual suspension confirms that the harm materialized and was recognized.

AI Trick: How a Student Earned Millions from MAGA Fans

2026-04-24
GIGA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google Gemini) to create a fake influencer persona that deceived and financially exploited a large audience. The harm is realized as followers paid money based on false representation, which is a direct consequence of the AI-generated content and persona. This meets the criteria for an AI Incident because the AI system's use directly led to harm to people (financial exploitation) and communities (manipulation of political discourse).

Hot Body Goes Viral! Pro-Trump Nurse Beauty Turns Out to Be an AI Fake... Creator: MAGA Supporters Are "Super Stupid" | 聯合新聞網

2026-04-24
UDN
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake influencer persona that deceived millions of followers, leading to financial fraud and manipulation of political opinions. The AI-generated content was central to the incident, and the harm includes deception, financial exploitation, and misinformation targeting a specific community. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and individuals through fraud and misinformation.

Hot Nurse Has Netizens "Donating Like Crazy" as the Mastermind Rakes It In! He Laughs and Reveals the Truth

2026-04-23
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create and operate a virtual influencer persona that deceived users into financially supporting a fake character, constituting fraud. The harm is financial and reputational, affecting the online community and individual supporters. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of law (fraud) and harm to communities (deception and financial loss).

Pro-Trump Sexy Influencer Rakes In Cash, True Identity Exposed! Overseas Mastermind Laughs That Fans Are "Super Stupid" | 壹蘋新聞網

2026-04-22
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The event describes an AI system being used to create a fake online persona that deceived people into spending money, constituting financial harm. The AI-generated persona was used deliberately to scam users, which is a violation of rights and causes harm to individuals (financial harm). The involvement of AI in generating the persona and content is explicit, and the harm (scam/fraud) has occurred, meeting the criteria for an AI Incident.

Fake AI Influencer? KOL "Emily Hart" with Over a Million Followers Turns Out to Be AI-Generated! Man's Earnings of Over a Million from the Virtual Goddess Spark Heated Debate

2026-04-24
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system generating a virtual influencer used to deceive a large audience. This deception could plausibly lead to harms such as misinformation, manipulation, or violation of rights, especially given the political content shared by the AI persona. However, the article does not provide evidence that these harms have materialized or that legal rights have been violated, so it does not meet the threshold for an AI Incident. The event is not general AI news or a product launch, so it is not Unrelated; nor is it Complementary Information, since it does not provide updates or responses to a prior incident. The classification as an AI Hazard is therefore appropriate, as the AI-generated influencer's existence and use could plausibly lead to significant harm.

The Busty Nurse Was Fake! Medical Student Creates a "Trump-Fan Goddess," Posts a Stream of Nude Photos... and Earns His First Fortune from Subscriptions - 民視新聞網

2026-04-25
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate a fake social media persona and explicit images, which were used to deceive and manipulate a specific audience for financial gain. This deception constitutes harm to communities through misinformation and manipulation, and potentially violates platform rules and user rights. The AI system's development and use directly led to these harms, qualifying this as an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, as the fake persona gained millions of views and subscribers, causing actual deception and exploitation.

The Bikini-Clad "Blonde Beauty" Who Backed Trump... Her Unexpected Identity Is a "Shock"

2026-04-26
First-Class 경제신문 파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create and operate a fake influencer persona that spread political content and generated significant social media engagement. Although the article does not mention direct physical harm or legal violations, using AI-generated personas to deceive followers and manipulate political discourse significantly harms communities and public trust. The event therefore qualifies as an AI Incident, as the AI system's use caused realized harm through deceptive political influence and misinformation.

"상상도 못한 정체"...'MAGA' 마음 저격한 금발 미녀 진실은

2026-04-27
Wow TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a virtual influencer persona that actively posted content influencing a political community, generating significant engagement and monetary gain. This use of AI directly led to deceptive practices and misinformation, which constitutes harm to communities and a violation of trust. The removal of the account for fraudulent activity confirms the harm realized. Therefore, this qualifies as an AI Incident due to the AI system's role in creating and sustaining a misleading persona that caused harm through misinformation and manipulation.

"낙태 반대· 이민자 추방" 금발 미녀의 배신...진짜 정체는 학비 벌려는 인도인

2026-04-26
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (Google's Gemini) used to create a fake influencer persona that spread misleading political content and generated revenue by deceiving followers. The harm includes misinformation, manipulation, and financial deception of a community, which fits the definition of harm to communities under AI Incident criteria. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

"하루 한 시간으로 수천 달러 벌어" 트럼프 지지 미녀, AI였다

2026-04-26
아시아경제
Why's our monitor labelling this an incident or hazard?
An AI system (Google's Gemini) was explicitly used to create and manage a fake influencer persona that deceived millions, spreading political messages and generating income through manipulation. This led to realized harm by misleading a large community and engaging in fraudulent activity, which is a violation of trust and can be considered harm to communities. The incident involves the use of AI in a way that directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

"Trump-Supporting" Beauty Influencer's True Identity Comes as a "Shock"

2026-04-27
데일리안
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a virtual influencer who disseminated political content and monetized it while misleading followers about the influencer's real identity. The AI's role was pivotal in generating and maintaining the persona, which directly led to deceptive and harmful outcomes, including misinformation and financial exploitation. The deletion of the Instagram account for "fraudulent activity" further confirms the presence of harm. Hence, this is an AI Incident due to realized harm caused by the AI system's use.