AI-Generated Digital Exes Spark Privacy and Emotional Concerns in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A growing trend in China sees young people using AI to create digital replicas of ex-partners by uploading personal data such as chat logs and photos. While these virtual exes offer emotional comfort, the practice raises significant concerns about privacy, emotional dependency, and ethical boundaries.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is clearly involved, as it generates digital replicas based on personal data. The AI is being used for emotional coping, which is novel, but no realized harm is currently reported, such as a privacy breach or legally recognized emotional injury. The concerns raised about privacy and emotional dependency represent plausible future risks rather than confirmed harms. This event therefore fits best as an AI Hazard: the AI use could plausibly lead to harms related to privacy and emotional well-being, but no direct or indirect harm has been reported yet.[AI generated]
AI principles
Privacy & data governance; Human wellbeing

Industries
Consumer services

Affected stakeholders
Consumers; Other

Harm types
Human or fundamental rights; Psychological

Severity
AI hazard

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

China's Youth Recreate Ex-Partners Using AI After Breakups, Raising Privacy Concerns

2026-05-02
NDTV
Why Are People In China Creating AI Clones Of Their Ex-Partners? Trend Triggers Privacy Debate

2026-05-03
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved, as it generates digital replicas of ex-partners from personal data. The concerns raised relate to privacy violations and emotional infidelity, which could plausibly lead to rights violations or harm to individuals' emotional health. However, the article does not report any actual harm or legal violation having occurred, only a debate and concerns about potential issues. It therefore fits the definition of an AI Hazard, where the use of AI could plausibly lead to harm but no incident has been documented.
In China, an AI dating trend that lets users simulate conversations with exes

2026-05-02
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to simulate conversations with ex-partners, indicating AI system involvement. However, it only discusses concerns and potential issues without reporting any realized harm such as privacy violations, emotional harm, or other negative consequences. Since no harm has yet occurred but plausible risks exist, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the described practice.
Virtual Partners: This breakup trend is raising eyebrows: People are now recreating their exes with AI

2026-05-03
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as creating virtual versions of ex-partners based on personal data. Its use is raising concerns about privacy and emotional dependency, which are potential harms. However, the article does not document any realized harm or incident caused by the AI system, so it fits the definition of an AI Hazard, where the AI's use could plausibly lead to harm but has not yet done so. It is not Complementary Information, because the main focus is the emerging trend and its risks rather than updates or responses to a prior incident. It is not an AI Incident, because no actual harm has been reported. It is not Unrelated, because the AI system and its potential impacts are central to the article.
China trend of digital exes to help people heal from heartbreak sparks debate

2026-05-02
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating digital ex-partners that mimic former partners' behavior. However, the article only discusses the trend and the debate it has sparked, without reporting any actual injury, rights violation, or other harm caused by the AI system. The concerns about privacy and emotional dependency indicate potential future risks, making this an AI Hazard rather than an Incident. Since the article focuses on the trend and its implications rather than on a specific harm or response, it is not Complementary Information. Hence, the classification is AI Hazard.
Move Over Dating Apps: AI-Powered 'Digital Exes' Are Becoming China's New Break-Up Trend

2026-05-02
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to recreate digital ex-partners, which fits the definition of an AI system. The use of personal data to train these AI models raises privacy concerns and the potential for emotional harm. However, the article does not describe any actual incident of harm occurring, only concerns and debate about possible negative consequences. The event is thus best classified as an AI Hazard, reflecting a plausible future risk of harm from this AI application rather than an AI Incident or Complementary Information.