AI-Generated Microdrama Uses Real Faces Without Consent in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated Chinese microdrama, "The Peach Blossom Hairpin," used the likenesses of real individuals, including model Christine Li, without their consent. The show, hosted on ByteDance's Hongguo app, caused reputational harm and distress, prompting legal action and raising concerns over AI misuse and personal rights violations in China.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system was used to generate digital twins of real individuals without their consent, directly leading to reputational harm and potential professional damage. The unauthorized use of their likenesses in a public AI-generated drama constitutes a violation of their rights. The harm is realized, not just potential, as the individuals have experienced distress and fear, and the platform had to remove the content after public outcry. Hence, this is an AI Incident due to direct harm caused by the AI system's use.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Workers

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

'Clearly me' - Chinese AI drama accused of stealing faces

2026-04-24
RTE.ie
'Clearly me': AI drama accused of stealing faces

2026-04-24
KTBS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content that used the likenesses of real people without consent, causing reputational harm and potential violations of portrait and reputation rights. The AI system's use directly harmed individuals' rights and reputations, and the harm is not hypothetical: the individuals experienced distress and the platform removed the content. This is a clear case of AI misuse causing harm to persons, fitting the definition of an AI Incident.
'Clearly me': AI drama accused of stealing faces

2026-04-24
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated show that used the faces of real people without their consent, a direct violation of their rights. The AI system's use in generating these likenesses without permission harmed the individuals' personal and possibly intellectual property rights. This fits the definition of an AI Incident, as the use of an AI system led to a breach of obligations intended to protect fundamental and intellectual property rights.
'Clearly me': AI drama accused of stealing faces

2026-04-24
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated microdrama that used the likenesses of real people without their consent, causing reputational harm and emotional distress. The AI system's development and use directly caused these harms by generating unauthorized digital twins that portrayed the individuals negatively, violating personal rights and legal protections and fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role in causing it is central.
'Clearly me': AI drama accused of stealing faces

2026-04-24
Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating microdramas that used real individuals' likenesses without consent, causing reputational harm and potential legal violations. The harm is direct and realized: the affected individuals experienced distress and fear, and the content was removed for violating platform rules. The AI system's role in creating the unauthorized digital likenesses that led to harm is clear and central, fitting the definition of an AI Incident under violations of human rights and harm to individuals.