AI Chatbot Mimics Murdered Teen, Family Outraged

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A family is outraged after discovering an AI chatbot on Character.ai mimicked their murdered daughter, Jennifer Ann, using her name and image without consent. This incident raises ethical concerns about privacy and the misuse of AI technology, highlighting the need for stricter regulations on personal identity use in AI.[AI generated]

Why's our monitor labelling this an incident or hazard?

The chatbot’s unauthorized use of the teen’s identity and likeness constitutes misuse of an AI system that directly harmed the family’s emotional well-being and violated the deceased’s personal rights.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Human wellbeing
Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI incident

AI system task
Interaction support/chatbots
Content generation

In other databases

Articles about this incident or hazard

AI chatbot mimics deceased teen: Family shocked and outraged, as ethical concern arises | Mint

2024-10-07
mint
Why's our monitor labelling this an incident or hazard?
The chatbot’s unauthorized use of the teen’s identity and likeness constitutes misuse of an AI system that directly harmed the family’s emotional well-being and violated the deceased’s personal rights.
Girl murdered in 2006 was revived as AI character, family raises objection

2024-10-07
India Today
Why's our monitor labelling this an incident or hazard?
An AI system (Character.ai chatbot) was misused to impersonate a real person who was murdered, leading to emotional harm to her family and a violation of privacy and likeness rights. The misuse has already occurred and caused direct harm, fitting the definition of an AI Incident.
AI Chatbot Mimics Murdered Teen, Family Outraged Over Invasion of Privacy

2024-10-07
TimesNow
Why's our monitor labelling this an incident or hazard?
The unauthorized use of a deceased individual’s identity and likeness by an AI system constitutes a violation of personal and human rights, leading directly to emotional harm for the family. This meets the criteria for an AI Incident under ‘violations of human rights or a breach of applicable law intended to protect fundamental rights.’
AI revives murdered girl from 2006, leaving her family stunned | - Times of India

2024-10-08
The Times of India
Why's our monitor labelling this an incident or hazard?
This event describes the actual use of an AI system to create an unauthorized, non-consensual chatbot representation of a murdered girl, infringing on privacy and dignity. The AI’s deployment directly led to harm (ethical breach, emotional distress, violation of personal rights).
An AI chatbot of a girl murdered in 2006 was created on a hugely popular service -- and her family had no idea

2024-10-08
DNyuz
Why's our monitor labelling this an incident or hazard?
An AI system (a Character.ai chatbot) was used to create a persona of a deceased person without consent, causing emotional harm to the family, which is harm to persons. The harm is realized rather than potential: the family experienced distress and trauma, and the company's inadequate response compounded the issue. The event therefore qualifies as an AI Incident.
An AI chatbot of a girl murdered in 2006 was created on a hugely popular service -- and her family had no idea

2024-10-08
Business Insider Africa
Why's our monitor labelling this an incident or hazard?
An AI system (a Character.ai chatbot) was used to create a persona of a deceased person without consent, causing emotional harm to the family, which qualifies as harm to persons. The harm is realized rather than a potential risk: the chatbot was used in at least 69 chats and caused significant emotional distress. The event therefore qualifies as an AI Incident.
Family outraged as AI chatbot mimics murdered daughter

2024-10-08
The Business Standard
Why's our monitor labelling this an incident or hazard?
An AI system (a chatbot on Character.ai) impersonated a murdered person, using her name and image without consent. This misuse directly caused emotional harm to the family and violated their privacy and dignity. Because the harm is direct, realized, and significant, the event meets the definition of an AI Incident under violations of human rights or of obligations protecting fundamental rights.
His daughter was murdered. Then she reappeared as an AI chatbot.

2024-10-15
Washington Post
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system (Character.AI) whose misuse (user-created chatbot impersonating a real, deceased individual) directly harmed the victim’s family—violating privacy and causing psychological distress. This meets the definition of an AI Incident (unauthorized impersonation and emotional harm).
His daughter was murdered - then she reappeared as an AI chatbot

2024-10-15
NZ Herald
Why's our monitor labelling this an incident or hazard?
An AI system (Character AI’s generative chatbot) was used to impersonate a real, deceased individual without consent, leading to emotional harm and violations of the family’s rights. The harm is realized (not hypothetical), and the AI’s misuse directly caused distress, fitting the criteria for an AI Incident under violations of rights and personal harm.
His daughter was murdered. Then she reappeared as an AI chatbot

2024-10-15
The Detroit News
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because a deployed AI system (Character.AI’s generative-chatbot platform) directly led to harm: unauthorized impersonation of a deceased minor, misuse of personal data, defamation, and emotional trauma for the family. The incident involves realized harm (psychological and privacy violations) resulting from the AI’s outputs.
Dad Discovers Murdered Daughter Is Now a Chatbot

2024-10-18
Newser
Why's our monitor labelling this an incident or hazard?
The AI system (Character.ai) was used to create a chatbot impersonating a deceased person without consent, causing emotional distress to the family. The platform's failure to prevent or promptly remove the impersonation, despite terms prohibiting it, points to use-related harm. The event fits the definition of an AI Incident: the system's use directly led to emotional and privacy-related harm and violated rights of consent and dignity.
Character.AI Faces Scrutiny After Chatbot Impersonates Deceased Teen - Blockonomi

2024-10-18
Blockonomi
Why's our monitor labelling this an incident or hazard?
An AI system (the Character.AI chatbot platform) was used to impersonate a deceased person, causing psychological harm to the family and violating rights related to personal information and consent. Although the company removed the chatbot after notification, the harm had already occurred, so the event is a realized AI Incident rather than a potential risk.
His daughter was murdered. Then she reappeared as an AI chatbot.

2024-10-16
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system (the chatbot) was developed and deployed using the deceased daughter's identity without authorization, causing emotional harm to her family and violating their rights. This fits the definition of an AI Incident under violations of human rights or of obligations protecting fundamental rights.
Father Disgusted to Find His Murdered Daughter Was Brought Back as an AI

2024-10-19
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system (a chatbot on Character.AI) was used to create a persona of a murdered teen without consent, causing emotional harm to her family and violating ethical and privacy norms. Using generative AI to impersonate a real person is a clear violation of rights, and the platform's reactive removal after discovery does not negate the harm already caused, so the event qualifies as an AI Incident.
Dad's agony after his murdered daughter was turned into AI chatbot

2024-10-19
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system (the chatbot) was used to create a digital persona of a murdered individual without consent, causing emotional harm to the family and violating their rights. The harm is realized rather than potential: the chatbot was publicly accessible and caused distress, and the platform's removal of the bot, while a response, does not negate the incident itself.
His daughter was murdered. Then she reappeared as an AI chatbot

2024-10-17
The Age
Why's our monitor labelling this an incident or hazard?
An AI chatbot recreated a deceased person without consent, violating personal rights and causing emotional harm to the family, which fulfills the criteria for an AI Incident under violations of human rights or of obligations intended to protect fundamental rights. The harm is realized rather than potential: the father experienced distress upon discovering the AI profile.
Dad discovers slain daughter being used as an AI chatbot

2024-10-18
businessdesk.co.nz
Why's our monitor labelling this an incident or hazard?
The event involves an AI chatbot that used the identity and likeness of a murdered individual without consent, a misuse of AI technology. This use directly violated the dignity and rights of the deceased and caused emotional harm to the family, fitting the definition of an AI Incident under violations of human rights or harm to communities.
Dad's Agony After Discovering His Murdered Daughter Had Been Turned Into An AI Chatbot - Ny Breaking News

2024-10-19
NY Breaking News
Why's our monitor labelling this an incident or hazard?
An AI system (a chatbot on Character.AI) was used to create a digital persona of a murdered individual without consent, using her name and likeness and causing emotional distress to her family. The system's use directly led to emotional and privacy harm and violated the privacy and dignity of the deceased and her family. The platform initially failed to prevent the misuse and removed the bot only after a complaint, so the event meets the criteria for an AI Incident involving realized harm.
Father devastated to learn that his daughter's remains were used to create an AI chatbot

2024-10-19
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
An AI system (a chatbot on Character.AI) was used to create a digital persona of a deceased individual without consent, directly causing emotional harm to her father. The system's use violated privacy and respect for the deceased, falling under violations of human rights or related obligations. Because the harm is realized and significant, involving psychological distress, this is an AI Incident rather than a hazard or complementary information.