Deepfake AI Pornography Case in Taiwan Highlights Legal Gaps for Victims


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

YouTuber Xiao Yu used AI deepfake technology to create and sell non-consensual pornographic videos featuring celebrities such as Cheng Chia-chun, causing psychological and reputational harm. Although he received a five-year prison sentence and his criminal profits were confiscated, current Taiwanese law prevents victims from receiving compensation, exposing legal gaps amid rising AI-related abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the malicious use of an AI system (deepfake technology) to create harmful content without consent, causing psychological harm and violating the rights of individuals. The perpetrator's use of AI directly led to these harms, and the article discusses the legal and social consequences. This therefore qualifies as an AI Incident because the AI system's use directly led to harm to persons and violations of rights.[AI generated]
AI principles
Respect of human rights
Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological
Reputational
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Xiao Yu sentenced to 5 years in deepfake case and now in prison! Cheng Chia-chun reveals the harsh reality: victims still have to pay their own legal fees | Entertainment | NOWnews

2026-05-13
NOWnews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the malicious use of an AI system (deepfake technology) to create harmful content without consent, causing psychological harm and violating the rights of individuals. The perpetrator's use of AI directly led to these harms, and the article discusses the legal and social consequences. This therefore qualifies as an AI Incident because the AI system's use directly led to harm to persons and violations of rights.

Face-swap victim Cheng Chia-chun accuses Xiao Yu of "never apologizing"! Victims receive zero compensation... and still have to pay their own legal fees | ETtoday Star Cloud | ETtoday

2026-05-13
ETtoday Star Cloud (ETtoday星光雲)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Deepfake technology) to create manipulated videos that caused harm to individuals (violation of rights and personal harm). The criminal use of this AI system led to a conviction and imprisonment, confirming direct harm. Although the article also discusses legal and compensation issues, the core event is an AI Incident because the AI system's malicious use directly caused harm to the victims.

Xiao Yu raked in NT$13.38 million from face-swapped porn! Cheng Chia-chun painfully reveals the harsh reality: the money cannot be paid to the victims | udn Entertainment

2026-05-13
United Daily News (udn.com)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Deepfake technology) to create harmful manipulated videos without consent, leading to violations of rights and harm to individuals. The AI system's use directly caused harm, fulfilling the criteria for an AI Incident. The article discusses the harm caused, legal consequences, and ongoing challenges for victims, confirming that harm has materialized rather than being a potential future risk.

Face-swapped into porn sold online: "Chicken Cutlet Girl" Cheng Chia-chun reveals Xiao Yu's current situation: criminal proceeds will not be paid to the victims | Entertainment | SET News SETN.COM

2026-05-13
SET News (三立新聞)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (DeepFake technology) to produce harmful content without consent, leading to violations of rights and harm to individuals. The AI system's use directly caused harm through the creation and distribution of non-consensual deepfake pornography, which is a form of online violence and rights violation. The criminal has been convicted and is serving a sentence, confirming the harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to persons and communities.

Xiao Yu's NT$13.38 million from face-swapped porn confiscated by the state, but victims like Cheng Chia-chun get nothing! The reason revealed | NextApple News

2026-05-13
NextApple News (壹蘋新聞網)
Why's our monitor labelling this an incident or hazard?
The article describes the malicious use of AI-based face-swapping technology to create non-consensual deepfake pornography, which directly harms the victims by violating their rights and causing personal and reputational damage. The AI system's use led to criminal profits and legal action. Although the confiscated proceeds are not being compensated to victims due to legal technicalities, the harm caused by the AI system's misuse is realized and ongoing. Therefore, this event meets the criteria for an AI Incident due to violations of human rights and harm to individuals caused by the AI system's use.