Taiwanese YouTuber Used AI Deepfake Technology to Create and Sell Non-Consensual Pornographic Videos, Sparking Major Legal Case

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

YouTuber Zhu Yuchen (Xiao Yu) and an accomplice used AI deepfake software to create and sell non-consensual pornographic videos featuring the faces of 119 public figures, causing severe psychological harm and privacy violations. The case led to criminal charges, civil lawsuits, and calls for stricter digital crime laws in Taiwan.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of DeepFake AI technology to create illicit face-swapped videos without consent, which constitutes a violation of human rights and digital sexual violence. The harm is realized and ongoing, with victims suffering reputational and psychological damage. The involvement of AI in generating the harmful content directly led to the incident, fulfilling the criteria for an AI Incident under violations of rights and harm to individuals.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Human wellbeing; Accountability; Safety; Transparency & explainability; Robustness & digital security; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
General public

Harm types
Psychological; Human or fundamental rights; Reputational

Severity
AI incident

Business function:
Sales

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard

Made the "star" of Xiao Yu's face-swap porn! Victim Huang Jie travels north to court in fury: "Not yielding an inch"

2022-06-09
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of DeepFake AI technology to create illicit face-swapped videos without consent, which constitutes a violation of human rights and digital sexual violence. The harm is realized and ongoing, with victims suffering reputational and psychological damage. The involvement of AI in generating the harmful content directly led to the incident, fulfilling the criteria for an AI Incident under violations of rights and harm to individuals.

Face-swap porn raked in NT$13.38 million! Lawyer asks court for a 12-year sentence; Xiao Yu stunned in court

2022-06-10
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Deepfake technology) to create harmful synthetic content without consent, which constitutes a violation of personal rights and causes harm to the victims. The harm is realized and ongoing, as evidenced by the legal actions and victim testimonies. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Turned into the "star" of obscene face-swap videos, Huang Jie in court: firmly refuses to settle

2022-06-09
UDN
Why's our monitor labelling this an incident or hazard?
The article describes a case where an AI system (face-swapping software) was used to create and distribute non-consensual explicit videos, directly harming the individuals depicted. This constitutes a violation of personal rights and digital sexual violence, which falls under violations of human rights and harm to individuals. The involvement of AI in producing the harmful content and the resulting legal actions confirm this as an AI Incident.

Xiao Yu admits to selling face-swap porn; victim LongLong speaks out: will not accept a settlement

2022-06-10
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake (an AI system) to create manipulated pornographic videos without consent, causing harm to the victims through defamation, privacy violations, and illegal use of personal data. These harms fall under violations of human rights and breach of applicable laws protecting personal data and dignity. The AI system's use directly led to these harms, meeting the criteria for an AI Incident.

Million-subscriber YouTuber Xiao Yu sold face-swap porn; victims' lawyer: hoping the court hands down a heavy 12-year sentence

2022-06-09
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create manipulated videos without consent, leading to harm to at least 119 victims. The harm includes violations of personal rights, defamation, and privacy breaches, which fall under violations of human rights and legal protections. The AI system's use directly led to these harms, and the legal proceedings and victim testimonies confirm the realized impact. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Face-swapped into porn by million-subscriber YouTuber Xiao Yu, Huang Jie: absolutely no settlement

2022-06-09
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to swap faces of individuals onto pornographic videos without consent, which is a clear violation of personal rights and privacy. The harm is realized and ongoing, as victims suffer from the distribution and persistence of these videos online, causing psychological and reputational damage. The involvement of AI in the creation of these videos and the resulting harm meets the criteria for an AI Incident under the OECD framework, specifically under violations of human rights and harm to communities. The legal proceedings and calls for legislative action further confirm the seriousness and materialization of harm.

Face-swapped into porn by million-subscriber YouTuber Xiao Yu, Huang Jie: absolutely no settlement

2022-06-09
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to manipulate facial images and produce harmful content without consent. This use of AI has directly caused harm to individuals' rights and reputations, constituting violations of personal and possibly intellectual property rights. The harm is realized and ongoing, with legal prosecution and victim complaints. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals and communities.

Face-swapped into Xiao Yu's porn videos, Huang Jie in court: no settlement; the videos are still online

2022-06-09
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology (an AI system) to create and distribute harmful content without consent, leading to violations of personal data protection laws, defamation, and moral harm. The harm is realized and ongoing, as victims continue to suffer from the presence of these videos online. The legal proceedings and victim testimonies confirm the direct link between the AI system's malicious use and the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities.

Deepfake technology swapped celebrities' faces into porn; Xiao Yu seeks leniency while victims hope for a 12-year sentence

2022-06-09
聯合影音
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology (an AI system) to create harmful content without consent, leading to violations of personal rights and ongoing harm to many individuals. The AI system's use directly caused harm (violation of rights, defamation, and harm to dignity), fulfilling the criteria for an AI Incident. The legal prosecution and victim testimonies confirm the harm is materialized and significant. Therefore, this is classified as an AI Incident.

"Face-swap porn" case in court! Xiao Yu apologizes repeatedly; five victims glare in anger and refuse to forgive

2022-06-09
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of DeepFake AI technology to create manipulated videos without consent, resulting in serious violations of personal data protection laws and harm to the victims' dignity and privacy. The harm is direct and materialized, as victims have suffered from the unauthorized use of their likeness in pornographic content. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and personal harm.

WACKYBOYS beauty becomes the "star" of Xiao Yu's face-swap porn! In court she gestures the amount of compensation sought

2022-06-09
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article describes an AI Incident because the development and use of AI-based face-swapping technology directly led to violations of personal rights and dignity (a form of harm to individuals and communities). The AI system's misuse caused significant harm to the victims, including public figures, and has resulted in legal proceedings. The continued circulation of these manipulated videos online further compounds the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm.

Xiao Yu "face-swap porn" case in court! Huang Jie: punish digital sexual violence severely, not yielding an inch

2022-06-09
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The use of Deep Fake technology, an AI system capable of generating realistic synthetic media, to produce and distribute non-consensual explicit videos constitutes a clear AI Incident. The harm includes violations of human rights and digital sexual violence against multiple individuals, which is explicitly described in the article. The AI system's use directly caused these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Closing arguments conclude in Xiao Yu's face-swap porn case; victim Huang Jie appears in court hoping for a heavy sentence

2022-06-09
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (DeepFaceLab) to produce and distribute harmful deepfake content, which constitutes a violation of personal rights and causes harm to individuals and communities. The AI system's use directly led to the harm described, including emotional distress and reputational damage. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused violations of rights and harm to persons.

Closing arguments conclude in Xiao Yu's face-swap porn case; victim Huang Jie appears in court hoping for a heavy sentence

2022-06-09
HiNet
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the defendants used AI-based face-swapping software (DeepFaceLab) to produce and distribute harmful deepfake pornography without consent, affecting at least 119 victims. This constitutes a clear AI Incident as the AI system's use directly caused violations of personal rights and significant harm to the victims. The harm is realized and ongoing due to the circulation of these videos online, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

WACKYBOYS beauty becomes the star of Xiao Yu's porn! Compensation claim revealed

2022-06-10
HiNet
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-based face-swapping technology to create non-consensual explicit videos, which directly harms the individuals involved by violating their rights and causing reputational and emotional damage. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and significant harm to individuals. The ongoing legal proceedings and victim compensation claims further confirm the realized harm.

Xiao Yu apologizes repeatedly in court; five victims refuse to forgive

2022-06-09
HiNet
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of DeepFake AI technology to create manipulated videos that caused harm to many individuals, constituting violations of personal data protection and harm to victims' dignity and privacy. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The legal actions and victim testimonies confirm the harm has materialized, not just potential. Therefore, this event is classified as an AI Incident.

Closing arguments conclude in Xiao Yu's face-swap porn case; victim Huang Jie appears in court hoping for a heavy sentence

2022-06-09
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the defendants used AI-based face-swapping software (DeepFaceLab) to produce and distribute illicit deepfake videos without consent, affecting 119 victims. This constitutes a direct AI Incident as the AI system's use caused violations of personal rights and significant harm to the victims. The harm is realized and ongoing due to the circulation of these videos online, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Face-swap porn case back in court! Huang Jie insists on "absolutely no settlement"; Xiao Yu replies with 11 characters and hurries away

2022-06-09
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create and distribute fake adult videos without consent, causing harm to at least 119 victims. This constitutes a violation of human rights and personal dignity, fulfilling the criteria for an AI Incident. The AI system's use (deepfake AI) directly led to the harm described. The ongoing legal case and admission of guilt confirm the realized harm rather than a potential risk, so it is not merely a hazard or complementary information.

Deepfake face-swap obscene video case in court; influencer Xiao Yu apologizes a second time

2022-06-09
公共電視
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create harmful content that has been distributed online, causing direct harm to individuals' dignity and mental health, which constitutes violations of human rights and harm to communities. The malicious use of AI to produce and spread non-consensual explicit videos is a clear AI Incident as per the definitions. The ongoing legal actions and societal responses further confirm the realized harm caused by the AI system's use.

Xiao Yu sold "face-swap porn"; councilor Huang Jie appears at New Taipei District Court in fury: absolutely no settlement

2022-06-09
三立新聞
Why's our monitor labelling this an incident or hazard?
The article describes the criminal use of Deep Fake AI technology to produce and sell manipulated videos without consent, causing harm to individuals' rights and reputations. The involvement of AI in generating these videos is explicit, and the harm includes violations of personal data protection laws, defamation, and distribution of obscene material. The legal prosecution and victim testimonies confirm that harm has occurred. Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to individuals.

Victims' lawyer hopes for a heavy 12-year sentence; Xiao Yu apologizes in court: "I have reflected; I am sorry"

2022-06-09
三立新聞
Why's our monitor labelling this an incident or hazard?
The use of Deep Fake technology constitutes an AI system as it involves AI-based face-swapping and video manipulation. The event describes the development and use of this AI system to produce and distribute non-consensual explicit content, which is a violation of personal rights and causes harm to the victims. The harm is realized and ongoing, meeting the criteria for an AI Incident. The legal case and victim impact statements further confirm the direct link between the AI system's use and the harm caused.

"Face-swap porn" case: Xiao Yu in court; Huang Jie angrily rules out a settlement; he coldly replies with 11 characters at the district court

2022-06-09
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Deep Fake AI technology to produce and sell fake pornographic videos without consent, causing harm to many individuals. This constitutes a violation of rights and harm to communities, meeting the criteria for an AI Incident. The legal actions and victim statements confirm that the harm is realized, not just potential. Therefore, this event is classified as an AI Incident.

Synthetic face-swap obscene videos! Unable to pay the sky-high compensation, Xiao Yu pleads guilty in court and asks for a lighter sentence

2022-06-09
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of face-swapping software, which is an AI system capable of generating synthetic videos by replacing faces. The malicious use of this AI system caused direct harm to the victims through privacy violations, defamation, and psychological trauma. The harm is realized and ongoing, with legal consequences and victim impact described. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Influencer Xiao Yu made over NT$10 million selling "face-swap porn"; councilor Huang Jie appears in court in person: absolutely no settlement!

2022-06-09
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event describes the production and sale of 'face-swap' adult videos using AI technology, which directly harms the victims by violating their rights and causing digital sexual violence. The involvement of AI in creating these deepfake videos is explicit, and the harm to individuals and communities is realized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to persons.