AI-Generated Instagram Model Deceives Millionaires and Athletes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated Instagram influencer, 'Emily Pellegrini,' amassed over 140,000 followers and even secured date invitations from wealthy businessmen and sports stars who believed she was real. Her anonymous creator used ChatGPT to craft her ideal appearance and earned significant income on Fanvue. The incident underscores the dangers of AI-driven virtual personas deceiving users.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to create a virtual model that interacts on social media, leading to real people being misled and engaging with a non-existent person. This deception can cause harm to individuals' emotional well-being and trust, which qualifies as harm to communities or individuals. Since the AI system's use directly led to this harm, this event qualifies as an AI Incident under the framework.[AI generated]
AI principles
Transparency & explainability
Accountability
Human wellbeing
Safety
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Consumer services
Digital security

Affected stakeholders
Consumers

Harm types
Psychological
Economic/Property
Reputational

Severity
AI incident

Business function
Marketing and advertisement
Sales

AI system task
Content generation


Articles about this incident or hazard

Unbelievable: he created a model with artificial intelligence, and sports stars asked her out believing she was real

2024-01-03
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a virtual model that interacts on social media, leading to real people being misled and engaging with a non-existent person. This deception can cause harm to individuals' emotional well-being and trust, which qualifies as harm to communities or individuals. Since the AI system's use directly led to this harm, this event qualifies as an AI Incident under the framework.

Photos: This is the model causing a stir among footballers and businessmen

2024-01-06
www.vanguardia.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates a virtual model persona. The use of this AI system has directly led to deception of individuals, including celebrities and businessmen, who interact with the AI-generated persona under false pretenses. This constitutes a harm related to misinformation and deception, which can be considered harm to communities or individuals. Therefore, this event qualifies as an AI Incident due to the realized harm of deception and potential emotional or reputational harm to the individuals involved.

Footballers and millionaire businessmen asked out a model created with AI: her creator reveals the details | El Popular

2024-01-05
Diario El Popular
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (an AI-generated virtual model) that was developed and used to create a realistic persona on social media, which directly led to social deception and emotional manipulation of real individuals, including wealthy and famous people. This constitutes harm to communities and individuals through misinformation and deception, fulfilling the criteria for an AI Incident. The AI system's use directly led to these harms, as people were misled into believing the AI persona was a real human, resulting in social and emotional consequences.

Football stars try to flirt with a model created by Artificial Intelligence, believing she is a real girl

2024-01-04
MARCA
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT or similar generative AI) was used to create a realistic model persona, which led to people being misled and attempting to form personal relationships with a non-existent individual. This deception can be considered a harm to individuals (harm to communities or individuals through misinformation or deception). Since the AI's use directly led to this misleading interaction and potential emotional or social harm, this qualifies as an AI Incident.

Who is Emily Pellegrini? The AI-created model that confused sports stars and millionaires are trying to flirt with

2024-01-04
MARCA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system creating a realistic virtual model that deceives real people into believing she exists, which is a use of AI leading to potential harm through misinformation and emotional manipulation. Although no direct harm has been reported yet, the risk of harm through such deception is credible. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no actual harm has been documented in the article.

Unbelievable: he created a model with artificial intelligence, and sports stars asked her out believing she was real

2024-01-02
infobae
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates a realistic virtual model that deceives real people, including public figures, into believing the persona is real. This deception has led to real interactions and proposals, indicating direct harm in terms of misleading individuals and potentially causing emotional or reputational harm. The AI's role is pivotal in creating and sustaining this false identity. Although no physical harm or legal violations are reported, the social deception and its consequences fit within the scope of AI Incident as harm to communities or individuals. Therefore, this event is classified as an AI Incident.

Who is Emily Pellegrini, the "world's most popular model" who catches the attention of footballers and billionaires inviting her on trips and to expensive restaurants? Is she real?

2024-01-04
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
The article focuses on an AI-generated virtual influencer and the social phenomenon around it. While the AI system is involved in creating a realistic persona that some users believe to be real, there is no evidence of injury, rights violations, or other harms caused by this AI system. The event does not describe any incident or hazard but rather informs about the existence and social reception of AI-generated models. Therefore, it fits best as Complementary Information, providing context and understanding of AI's societal impact without reporting an AI Incident or AI Hazard.

Emily Pellegrini, the model who doesn't exist and is driving famous athletes and millionaires crazy

2024-01-03
Perfil
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system generating realistic images and videos of a non-existent model, which is used to deceive real people, including famous and wealthy individuals. This deception constitutes harm to individuals and communities by misleading them and potentially causing emotional or financial harm. The AI system's use is central to the event, and the harm is realized, not merely potential. Although no physical injury or legal violation is reported, the deception and manipulation of individuals through AI-generated content meet the criteria for harm to communities or individuals. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

A model created with artificial intelligence drives famous athletes and millionaires crazy: photos and videos

2024-01-03
Noticias RCN | Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems creating virtual models, which fits the definition of an AI system. However, there is no mention of any realized harm or violation caused by these AI models, only potential concerns and societal debates. The interactions described (e.g., men messaging the AI model) do not constitute harm as per the definitions, since no injury, rights violation, or other significant harm is reported. The article focuses on describing the phenomenon and its implications rather than reporting an incident or hazard. Thus, it fits the category of Complementary Information, as it enhances understanding of AI's societal impact without describing a specific AI Incident or AI Hazard.

The model who made waves on social media: sports stars asked her out believing she was real, but it was all an AI creation

2024-01-03
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used to generate realistic virtual models that interact on social media. The AI's use led to deception of users, including public figures, but no actual harm such as physical injury, rights violations, or other significant harms is reported. The event focuses on the phenomenon and the developer's work rather than any incident of harm or a credible risk of harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides valuable insight into AI's societal implications, fitting the definition of Complementary Information.

Meet Emily, the AI-created model who has fooled footballers and elite athletes

2024-01-03
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system generating a realistic virtual persona that has successfully deceived real people, including famous athletes, into believing she is real. This deception constitutes a harm to individuals and communities by misleading them, which fits the definition of an AI Incident under violations of rights or harm to communities. The AI system's use directly led to this harm through its outputs and interactions. Therefore, this event qualifies as an AI Incident.

A model created with artificial intelligence draws the attention of sports stars, who believe she is a real person

2024-01-02
Diario La Gaceta
Why's our monitor labelling this an incident or hazard?
The AI system (an AI-generated model) is clearly involved, but the event does not describe any realized or potential harm as defined by the framework. The attention attracted by the AI model does not constitute harm to individuals or communities, nor does it imply violations of rights or other significant harms. Therefore, this is a general AI-related news item providing contextual information about AI capabilities and social phenomena, fitting the category of Complementary Information rather than an Incident or Hazard.

He created a model with artificial intelligence, and millionaires and sports stars asked her out

2024-01-02
HoyBolivia.com - El primer Periódico Digital de Bolivia
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved in generating virtual models that interact with real users, which fits the definition of an AI system. However, the article does not describe any injury, rights violation, or other harm caused by this AI use. The financial transactions on Fanvue are conducted with full knowledge that the models are virtual, so no deception causing harm is reported. The article mainly informs about the phenomenon and its implications, without reporting an incident or hazard. Thus, it is Complementary Information rather than an Incident or Hazard.

Footballers fooled by the model created with artificial intelligence: the chats published

2024-01-02
Fanpage
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a highly realistic virtual influencer that misled real people, including famous athletes, into believing they were interacting with a real human. This deception constitutes harm to individuals and communities by violating trust and potentially infringing on rights related to informed consent and personal interaction. The AI's role was pivotal in generating the false persona and enabling the misleading interactions. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Emily Pellegrini, perfect curves and 150,000 followers. But she doesn't exist: she's pure AI

2024-01-05
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Emily Pellegrini is an AI-generated persona created using generative AI (ChatGPT and image generation). The AI system is used to create realistic images and persona that influence social media and commercial interactions. However, there is no mention of any injury, rights violation, misinformation causing harm, or other harms. The event does not describe any direct or indirect harm caused by the AI system, nor does it indicate a plausible future harm scenario. It mainly provides information about the AI system's use and its social and economic impact, which fits the definition of Complementary Information rather than an Incident or Hazard.

Who is Emily Pellegrini, the influencer created with artificial intelligence

2024-01-03
Optimagazine: ultime news, video e notizie italiane e dal mondo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT and possibly other generative AI tools) used to create a fictional influencer persona. While this is a novel use of AI, the article does not report any injury, rights violation, disruption, or other harms caused by this AI-generated influencer. The interactions described are social and do not indicate deception leading to harm or legal violations. Therefore, this is a general AI-related news item about AI-generated content and its social impact, which fits the category of Complementary Information rather than an Incident or Hazard.

Everyone is crazy about Emily Pellegrini, but the influencer is just a creation of artificial intelligence - La Voce del Trentino

2024-01-05
La Voce Del Trentino
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the influencer is an AI-generated creation. However, the article does not report any actual harm caused by this AI-generated persona, nor does it describe any incident where the AI system's use has directly or indirectly led to injury, rights violations, or other harms. The article suggests a need for reflection on AI risks but does not document realized harm or a specific imminent threat. Therefore, this event is best classified as Complementary Information, providing context and raising awareness about AI-generated content and its societal implications without reporting a concrete AI Incident or Hazard.

Footballers smitten with the (fake) model Emily Pellegrini: "How is it possible that such a beautiful woman doesn't have a boyfriend?" - Il Fatto Quotidiano

2024-01-04
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a realistic fake influencer persona that deceived real people into believing she was a real person. This involves the use of generative AI for creating content and persona. While the article does not describe any direct harm occurring yet, the situation plausibly could lead to harms such as emotional harm, deception, or reputational damage. Since no actual harm is reported, but the AI-generated persona's use could plausibly lead to harm, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the AI-generated persona and its social impact potential, not on responses or ecosystem context. Therefore, the classification is AI Hazard.

Saraya Agency: She fooled football stars who asked to date her... Meet the woman every man dreams of

2024-01-05
وكالة أنباء سرايا (حرية سقفها السماء)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-generated virtual model used to impersonate a human and deceive real people, including celebrities. This deception has led to direct harm by misleading individuals into forming false relationships and offering money or dates, which can be considered harm to persons and possibly a violation of rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use.

The most sensational story in the world: how a beauty fooled wealthy men and football stars who asked to date her (photos)

2024-01-05
بوابة فيتو
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system creating a highly realistic virtual model that fooled real people, including celebrities, into interacting with it as if it were a real person. This deception led to people offering dates and private trips, indicating emotional and possibly financial harm. The AI system's use directly caused this harm by generating false representations and misleading users. Therefore, this qualifies as an AI Incident under the definition of harm to people and communities caused directly by the AI system's use.

Artificial intelligence lands football stars and celebrities in a crisis: what does the fake fashion model have to do with it?

2024-01-07
الوطن
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate highly realistic synthetic images and personas used on social media to deceive real people, including celebrities. This deception has directly led to harm by misleading individuals, causing reputational and emotional damage, and enabling financial exploitation. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

The story of the "smart" doll that fooled stars of football and martial arts

2024-01-03
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that generated a realistic virtual persona used to deceive real people, including celebrities, into believing they were interacting with a real human. This deception constitutes harm to persons by misleading them and potentially causing emotional or reputational damage. The AI system's use directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, as the deception has already occurred.

The artificial-intelligence beauty fools football stars

2024-01-06
الأيام
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fake identity (the AI doll) that successfully deceived people, leading to direct harm in the form of manipulation and potential emotional or financial damage. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (emotional or social harm) through deception.

Made with artificial intelligence: a beauty fools celebrities and footballers around the world

2024-01-05
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-generated virtual models) used to create realistic personas that have deceived real people, including celebrities and athletes. The deception has already occurred, causing harm by misleading individuals into believing in false identities, which can lead to emotional, reputational, or financial harm. The AI system's use is central to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized, not just potential, as the article describes actual interactions and invitations based on the AI-generated persona.

She was offered romantic dates and private trips... Who is the beautiful Emily Pellegrini, who fooled the rich and famous?

2024-01-05
صيدا أون لاين :: Saidaonline
Why's our monitor labelling this an incident or hazard?
The AI system here is the AI-generated virtual model (a form of AI content generation and persona creation) that was used to deceive people, including celebrities and athletes. This deception led to direct harm in the form of misleading and manipulating individuals, which can be considered harm to communities and individuals' emotional well-being. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through deception and manipulation.

Made with artificial intelligence: a beauty fools celebrities and footballers around the world - Lalish Media Network

2024-01-05
شبكة لالش الاعلامية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an AI-generated virtual model that interacts with real people, deceiving them into believing it is a real human. The AI's use has directly led to harm in the form of deception and manipulation of individuals, including celebrities and athletes, which fits the definition of an AI Incident under harm to communities or individuals. The harm is realized, not just potential, as the deception has already occurred and influenced behavior. Therefore, this event qualifies as an AI Incident.

She received hundreds of messages from footballers and billionaires: "He invited her to Dubai." The huge surprise about the young woman who breaks hearts

2024-01-05
Gazeta Sporturilor
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic fake persona that deceived real individuals, including celebrities and billionaires, into believing they were interacting with a real person. This deception can be considered harm to communities or individuals through misinformation and manipulation, as it undermines trust and could lead to emotional or reputational harm. Since the AI system's use directly led to this deceptive interaction and potential harm, this qualifies as an AI Incident under the framework.

Emily Pellegrini, "the hottest model in the world," who caught the attention of famous footballers and billionaires: "How is it possible you don't have a boyfriend?"

2024-01-05
Libertatea
Why's our monitor labelling this an incident or hazard?
The article details the use of an AI system to create a virtual influencer who has gained popularity and financial success. However, it does not describe any direct or indirect harm resulting from the AI system's development or use. There is no mention of injury, rights violations, misinformation, or other harms. The content is primarily informational about the AI system's social and commercial impact, without reporting incidents or hazards. Therefore, it fits best as Complementary Information, providing context and insight into AI applications and their societal effects without describing an incident or hazard.

An unsuspected "con" for famous athletes and billionaires. The story of the non-existent photo model who fooled the men (PHOTOS)

2024-01-05
Ziare.com
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the photomodel is generated by AI. The event involves the use of AI to create realistic images and persona that deceive people into believing the model is real. However, the article does not report any direct harm such as fraud, financial loss, or rights violations resulting from this deception. The focus is on describing the phenomenon and its social impact rather than a specific harmful incident or a credible risk of harm. Thus, it fits the definition of Complementary Information, as it provides supporting context about AI-generated content and its societal effects without constituting an AI Incident or AI Hazard.

A huge surprise! Who is the beautiful Emily Pellegrini?

2024-01-05
national.ro
Why's our monitor labelling this an incident or hazard?
The article focuses on an AI-generated virtual influencer gaining popularity and followers on social media. While the AI system was used to create realistic content and deceive some users into believing the persona is real, the article does not report any harm or violation resulting from this. There is no mention of injury, rights violations, misinformation causing harm, or other significant negative impacts. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context about AI's use in social media and influencer culture without describing harm or plausible harm.

Who Emily Pellegrini really is, and how the billionaires who believed she was real were fooled

2024-01-05
digisport.ro
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a realistic virtual model that deceived real people, including celebrities, into believing she was a real person. This deception led to interactions based on false information, which can be considered a violation of trust and potentially a harm to the individuals involved. The AI's use in generating a fake persona that misled others constitutes an AI Incident because the AI system's use directly led to harm in the form of deception and possible emotional or reputational damage. The article describes realized harm rather than just potential harm, as the celebrities engaged with the AI-generated persona believing it to be real.

Emily Pellegrini fooled several public figures into believing she is real

2024-01-06
Puterea.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic virtual persona that has fooled many people, which fits the definition of an AI system. However, the article does not report any direct or indirect harm such as physical injury, rights violations, or other significant harms caused by the AI system's use. The deception and financial gain are noted, but without evidence of harm or legal breach, this does not meet the threshold for an AI Incident or AI Hazard. It is primarily a report on the use and impact of an AI system, thus it is best classified as Complementary Information.

A footballer believed a woman created by AI was real. The messages he ended up sending her

2024-01-09
digisport.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic fictional person used on social media, which has led to people being deceived. This fits the definition of an AI system and its use. However, the article does not describe any actual harm occurring, only that people believed the persona was real and interacted with it. There is no mention of injury, rights violations, or other harms materializing. Therefore, this is best classified as an AI Hazard, since the AI-generated persona could plausibly lead to harms such as deception, emotional harm, or reputational damage, but no incident has yet occurred as per the article.