Meta's Hyper-Realistic Kendall Jenner AI Chatbot Sparks Public Concern


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta launched an AI chatbot named Billie, modeled after Kendall Jenner, on Instagram. The chatbot's hyper-realistic likeness and mannerisms have caused public unease and raised concerns about potential misuse of celebrity identities and unauthorized AI-generated content, though no direct harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is clearly involved as the chatbots use AI to generate realistic conversations and mimic celebrity appearances and voices. The event involves the use of AI systems in a way that has led to public concern and criticism, primarily about the potential for deception and emotional harm to users. However, there is no direct or indirect evidence of realized harm such as injury, rights violations, or disruption of infrastructure. The concerns are about potential psychological discomfort and ethical issues, but no concrete harm has been reported. Therefore, this event represents a plausible risk scenario where the AI system's use could lead to harm, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Transparency & explainability, Privacy & data governance, Respect of human rights, Accountability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Reputational, Psychological, Human or fundamental rights

Severity
AI hazard

Business function
Marketing and advertisement

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard


Kendall Jenner becomes a chatbot AI, and the criticism pours in

2023-10-13
20 minutos

Is it Kendall Jenner or not? The reality behind the celebrity videos on Instagram with Meta AI

2023-10-12
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating hyperrealistic videos and chatbots that impersonate celebrities. While the article does not report any direct harm such as injury, rights violations, or disruption, the use of AI to create realistic fake personas can plausibly lead to harms such as misinformation, identity misuse, or reputational damage. However, since no actual harm or incident is reported as having occurred, and the article mainly describes the existence and user reactions to these AI-generated profiles, this event fits best as Complementary Information. It provides context and raises awareness about AI capabilities and societal reactions without documenting a realized AI Incident or a clear AI Hazard.

This isn't Kendall Jenner: People are freaking out over Meta's...

2023-10-13
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (hyperrealistic AI chatbots) and their use, but no direct or indirect harm has been reported as having occurred. The concerns expressed are about potential misuse and reputational harm from unauthorized AI-generated likenesses, which is a plausible future harm but not realized in this case. Therefore, this qualifies as an AI Hazard because the development and use of these AI chatbots could plausibly lead to incidents such as identity misuse or reputational damage. It is not Complementary Information because the main focus is not on responses or governance but on the introduction and public reaction to these AI systems. It is not an AI Incident because no actual harm has been documented yet.

'Creepy' detail in new Kendall Jenner video

2023-10-15
News.com.au
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems—hyper-realistic AI chatbots that replicate celebrity likenesses and personalities. The event does not describe any realized harm but highlights public unease and a specific example of unauthorized AI-generated content (Tom Hanks' AI likeness used without consent). These factors indicate plausible future harms such as rights violations and reputational damage. Since no direct or indirect harm has yet occurred but the potential for harm is credible and recognized, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Kendall Jenner Is Now an AI Chatbot on Instagram: See the Creepy Clip

2023-10-11
Toofab
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (chatbots using AI to simulate conversations), but there is no indication that these AI systems have caused any injury, rights violations, disruption, or other harms. The users' reactions are subjective feelings of discomfort, which do not constitute harm as defined. There is also no mention of plausible future harm or risks beyond general unease. Hence, the event is not an AI Incident or AI Hazard. Instead, it is a report on the deployment of AI chatbots and public response, fitting the definition of Complementary Information.