Taiwanese Influencer Jailed and Fined for Deepfake Pornography Scandal

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwanese influencer Zhu Yuchen ('Xiao Yu') and his assistant used AI deepfake technology to superimpose the faces of celebrities and public figures onto pornographic videos without consent, profiting by more than NT$13 million. The scheme harmed 119 victims and resulted in criminal convictions and multiple civil compensation orders, highlighting the serious privacy and reputational violations caused by AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of deepfake AI technology to manipulate videos by swapping faces without consent, resulting in serious harm to the victims' reputation, privacy, and personal rights. The harm is realized and legally recognized, with court rulings ordering compensation. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Safety; Accountability; Robustness & digital security; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Reputational; Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard

Influencer Xiao Yu and his assistant made face-swap pornographic videos; ordered to pay a further NT$1 million | 聯合新聞網

2023-02-18
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake AI technology to manipulate videos by swapping faces without consent, resulting in serious harm to the victims' reputation, privacy, and personal rights. The harm is realized and legally recognized, with court rulings ordering compensation. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. Therefore, this is classified as an AI Incident.

Xiao Yu sued over selling 'stolen-face' intimate videos; Huang Jie and a YouTuber awarded NT$1 million

2023-02-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (Deepfake technology) in a harmful way, causing direct violations of human rights and personal dignity. The AI system's use led to the creation and distribution of illicit content without consent, causing significant harm to individuals. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to individuals, with legal consequences and compensation awarded to victims.

Influencer Xiao Yu and his assistant made face-swap pornographic videos; ordered to pay a further NT$1 million

2023-02-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to manipulate images and videos, which directly led to harm in the form of violations of personal rights including reputation, privacy, and dignity. The harm is realized and legally recognized, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. Therefore, this is classified as an AI Incident.

Xiao Yu raked in over NT$10 million from 'face-swap' pornographic videos! Ordered to pay influencer another NT$1 million

2023-02-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake AI technology to produce harmful content without consent, leading to violations of privacy rights and defamation, which are breaches of fundamental rights. The harm is realized and significant, affecting many individuals, and legal rulings have been made to address these harms. Therefore, it meets the criteria for an AI Incident as the AI system's use directly caused harm to persons and their rights.

'Xiao Yu' made face-swap pornographic videos; civil court orders another NT$1 million in damages to female influencer - Society - 自由時報電子報

2023-02-18
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI Deepfake technology to create non-consensual manipulated videos, which directly harmed over a hundred victims by infringing on their privacy and reputation. The harm is materialized and legally recognized through criminal and civil court rulings. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals. Therefore, the event is classified as an AI Incident.

Xiao Yu's AI face-swap pornographic videos! Sentenced to 5.5 years; ordered to pay female influencer a further NT$1 million | Society | Life | NOWnews今日新聞

2023-02-19
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Deepfake technology) to manipulate images and videos, causing direct harm to individuals through privacy violations and defamation. The harm is realized and legally recognized, with criminal and civil penalties imposed. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by the AI system's use.

Xiao Yu's AI face-swap pornographic videos! Ordered to pay influencer NT$1 million

2023-02-19
HiNet
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake AI technology to create manipulated videos without consent, causing harm to individuals' privacy and reputation. The harm is realized and legally recognized, with court rulings ordering compensation to victims. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals and communities.

Xiao Yu pays only NT$500,000 over face swap! Kui Ding furiously blasts it as humiliating

2023-03-14
HiNet
Why's our monitor labelling this an incident or hazard?
The article describes a case where an individual used AI deepfake technology to swap faces of many people onto other bodies without consent, causing harm to over a hundred victims including public figures. This constitutes a violation of personal rights and privacy, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The legal proceedings and compensation awarded confirm that harm has materialized. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the misuse of an AI system.

Xiao Yu ordered to pay NT$500,000 for selling face-swap pornographic videos; furious female influencer victim: 'this money is an outright insult' - 自由娛樂

2023-03-13
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake technology, an AI system that manipulates facial images to create realistic but fake videos. The misuse of this AI system caused direct harm to the victims by violating their privacy and personal rights, constituting a breach of applicable laws protecting fundamental rights. The court's ruling confirms the harm has materialized and the AI system's role is pivotal in causing this harm. Therefore, this qualifies as an AI Incident under the OECD framework.

Million-subscriber YouTuber 'Xiao Yu' ordered to pay five more women NT$500,000 each for selling face-swap pornographic videos - New Taipei - 自由時報電子報

2023-03-13
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly states the use of deepfake (an AI system) to manipulate images and create harmful content without consent, leading to violations of personal rights and reputational harm to multiple individuals. This constitutes direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The legal rulings and compensation further confirm the materialized harm.

Xiao Yu ordered to pay NT$500,000 over face swap; Kui Ding, who 'only learned from the news,' furious: 'an outright insult!' | ETtoday星光雲 | ETtoday新聞雲

2023-03-13
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI face-swapping technology (an AI system) to manipulate images of individuals without consent, resulting in psychological harm and violation of rights. The harm has already occurred, and legal consequences have been imposed. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Influencer Xiao Yu and assistant made face-swap pornographic videos; court orders NT$500,000 each to five more women | 聯合新聞網

2023-03-14
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create manipulated videos that caused harm to individuals by violating their rights and causing psychological trauma. This constitutes a violation of human rights and personal dignity, fitting the definition of an AI Incident. The harm is realized and legally recognized, with court rulings confirming the damages caused by the AI-enabled misuse.

Xiao Yu face-swap case: five more women awarded NT$500,000 each! Kui Ding, who 'only learned from the news,' blasts: 'an outright insult' | 噓!星聞

2023-03-14
UDN
Why's our monitor labelling this an incident or hazard?
The article describes a case where an individual used AI-based face-swapping technology to create and distribute non-consensual explicit videos of multiple victims, causing direct harm to those individuals. The harm includes violations of personal rights and dignity, which fits the definition of harm to persons and communities. The AI system's use is central to the incident, and the legal consequences confirm the harm has materialized. Hence, this qualifies as an AI Incident.

119 victims! Influencer Xiao Yu made over NT$10 million selling synthetic pornographic videos; five women awarded NT$500,000 each | Society | Life | NOWnews今日新聞

2023-03-13
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-based DeepFake technology to create synthetic pornographic videos without consent, which constitutes a violation of human rights and applicable laws protecting privacy and dignity. The harm to the victims is direct and significant, including reputational damage and emotional distress. The involvement of the AI system in producing these videos is central to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals.

Xiao Yu pays only NT$500,000 over face swap! Kui Ding, who 'only learned from the news,' blasts: 'an outright insult' | Entertainment | NOWnews今日新聞

2023-03-13
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article describes a case where an AI system (DeepFake) was used maliciously to create non-consensual face swaps causing harm to many individuals. This constitutes a violation of rights and personal harm, meeting the criteria for an AI Incident. The legal rulings and compensation orders confirm that harm has occurred and is recognized by the judicial system. Therefore, this event is classified as an AI Incident due to the direct harm caused by the AI system's misuse.

Xiao Yu ordered to pay NT$500,000 over face swap! Furious victim Kui Ding: 'an outright insult' | Influencers | Newtalk新聞

2023-03-14
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake AI technology) to create manipulated videos that harmed the victims' reputations and caused emotional distress, which is a violation of personal rights and a form of harm to individuals. The harm has already occurred, and legal actions have been taken, confirming the direct link between the AI system's use and the harm. Therefore, this qualifies as an AI Incident under the framework.

Xiao Yu pays heavily again! Five female influencers face-swapped into pornographic videos; each awarded NT$500,000 - Society

2023-03-13
China Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Deepfake technology) to create manipulated videos without consent, which constitutes a violation of personal data protection laws and infringes on the victims' rights and dignity. The harm is realized and significant, including psychological trauma and reputational damage. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm to individuals and breaches of legal protections.

Xiao Yu ordered to pay NT$500,000; furious Kui Ding cites his earnings: 'an outright insult' - Entertainment

2023-03-13
China Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake videos without consent, which is a misuse of AI leading to violations of personal data protection laws and harm to the victims' rights and dignity. The legal rulings and compensation confirm that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to individuals.

Xiao Yu profited from selling synthetic pornographic videos! Five women awarded NT$500,000 each

2023-03-13
HiNet
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-based DeepFake technology to create non-consensual synthetic pornography, which constitutes a violation of personal rights and privacy, a breach of applicable laws, and causes significant harm to the individuals involved. The legal actions and damages awarded demonstrate that the AI system's misuse has directly led to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused violations of rights and harm to individuals.

Xiao Yu ordered to pay NT$500,000 over face swap! Furious victim Kui Ding: 'an outright insult'

2023-03-14
HiNet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a deepfake AI system to manipulate images and videos without consent, resulting in harm to the victims through violation of their rights and emotional distress. The legal rulings and compensation orders confirm that harm has materialized due to the AI system's misuse. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to individuals.

Inconsistent rulings in Xiao Yu face-swap cases; Kui Ding only learned from the news! Blasts: 'an outright insult'

2023-03-15
HiNet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI face-swapping technology to create synthetic videos for profit, which harmed many individuals by violating their rights and causing reputational damage. This constitutes a violation of human rights and personal rights under criterion (c) of the AI Incident definition. The harm has already occurred, and legal rulings have been made, confirming the direct link between the AI system's use and the harm. Therefore, this event qualifies as an AI Incident.

Influencer Xiao Yu and assistant made face-swap pornographic videos; court orders NT$500,000 each to five more women | Society | 中央社 CNA

2023-03-14
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to generate harmful content (pornographic videos) without consent, which has directly led to violations of personal rights and caused psychological harm to multiple individuals. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and violations of rights under applicable law.

Inconsistent rulings in Xiao Yu face-swap cases; Kui Ding only learned from the news! Blasts: 'an outright insult'

2023-03-14
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of face-swapping technology, which is a form of AI system (deepfake AI) used to generate manipulated videos. The harm is realized as multiple victims have suffered from the unauthorized use of their likeness, leading to legal claims and court rulings awarding damages. This constitutes a violation of personal rights and privacy, a breach of legal protections, thus meeting the criteria for an AI Incident. The event is not merely a potential risk but involves actual harm and legal consequences, so it is not an AI Hazard or Complementary Information. It is directly related to AI system use causing harm.

Heavy damages over face-swap pornographic videos! Xiao Yu ordered to pay 'five women NT$500,000 each'; laments he can only return to his Yilan hometown for odd jobs | Society | 三立新聞網 SETN.COM

2023-03-13
SETN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of DeepFake technology, an AI system for face swapping, to create non-consensual pornographic videos involving multiple victims. This misuse has caused direct harm to the victims' reputation and personal rights, leading to legal actions and financial penalties. Therefore, this qualifies as an AI Incident because the AI system's use directly caused violations of rights and harm to individuals.

Xiao Yu face-swap case: five more women awarded NT$500,000 each! Kui Ding, who 'only learned from the news,' furious: 'an outright insult' | 娛樂星聞 | 三立新聞網 SETN.COM

2023-03-13
SETN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of DeepFake technology, an AI system for face-swapping, which was used maliciously to create non-consensual pornographic videos involving identifiable women. This constitutes a violation of human rights, specifically the victims' rights to their image and reputation, and has caused psychological harm. The legal ruling and compensation orders further confirm that harm has materialized. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.