Taiwanese YouTuber Prosecuted for Deepfake AI Pornography Scandal Involving 119 Victims


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

YouTuber 'Xiao Yu' and an accomplice used deepfake AI technology to create and sell non-consensual explicit videos featuring the faces of 119 public figures, causing severe and lasting harm. The pair profited over NT$13 million before being prosecuted for privacy violations, defamation, and sexual exploitation, prompting calls for stricter AI-related laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article references the use of AI technology for creating deepfake videos, which implies the involvement of an AI system. The creation and use of deepfake content for profit can be considered a violation of rights, particularly intellectual property rights and potentially privacy rights, which aligns with harm category (c). Since the deepfake videos were allegedly produced and used, this constitutes realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in producing harmful content and the resulting controversy and harm.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability; Transparency & explainability; Robustness & digital security; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function:
Sales

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard


Dragged down by Xiao Yu and gone for five months! Xiaoxiao announces his comeback, "streaming tomorrow": "I'll tell everyone what I know" | ETtoday星光雲 | ETtoday新聞雲

2022-03-24
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake technology) was involved in a prior incident causing harm (illegal deepfake videos), which is an AI Incident. However, this article primarily reports on Xiaoxiao's return to streaming after being indirectly affected by that incident. There is no new harm or plausible future harm described here. Therefore, this article is best classified as Complementary Information, providing an update related to a previous AI Incident but not describing a new incident or hazard.

Xiaoxiao reveals "Xiao Yu's latest situation" on livestream! Returning after being swept into the face-swap scandal: "Of course we're still friends" | ETtoday星光雲 | ETtoday新聞雲

2022-03-25
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The article references the use of AI technology for creating deepfake videos, which implies the involvement of an AI system. The creation and use of deepfake content for profit can be considered a violation of rights, particularly intellectual property rights and potentially privacy rights, which aligns with harm category (c). Since the deepfake videos were allegedly produced and used, this constitutes realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in producing harmful content and the resulting controversy and harm.

Implicated in Xiao Yu's face-swap porn case, Xiaoxiao returns after five months of silence: "I'll tell everyone what I know" | 噓!星聞

2022-03-25
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI face-swapping technology to produce illegal adult videos, leading to police action and investigation. This use of AI directly caused harm by violating laws and potentially infringing on individuals' rights. The harm has materialized, and the AI system's misuse is central to the incident. Therefore, this qualifies as an AI Incident under the framework.

Blasted as the face-swap porn operation's "bookkeeper"? Xiaoxiao personally reveals Xiao Yu's situation after five months out of sight | Entertainment | NOWnews今日新聞

2022-03-26
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to produce non-consensual face-swap pornographic videos, which is a direct violation of personal rights and privacy, thus constituting an AI Incident under the category of violations of human rights and harm to communities. The involvement of AI in creating harmful content and the resulting criminal consequences confirm this classification. The discussion about the financial transactions and the disappearance of one individual are related to the incident but do not change the classification.

Implicated in Xiao Yu's face-swap case! "Xiaoxiao" returns after a five-month absence: "I'll tell everyone what I know" | Influencers | Newtalk新聞

2022-03-25
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The use of Deepfake AI software to create fake adult videos without consent is a clear violation of rights and illegal activity, constituting an AI Incident due to harm caused by the AI system's use. The article focuses on the ongoing investigation and the collaborator's return, which is complementary information about the incident's aftermath rather than a new incident or hazard. Therefore, the event is best classified as Complementary Information related to a prior AI Incident involving Deepfake misuse.

The National Security Bureau takes interest! Xiaoxiao recalls getting a call of inquiry about the Xiao Yu face-swap case: "Did he make one of Tsai Ing-wen?" | Influencers | Newtalk新聞

2022-03-28
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology to create non-consensual explicit videos, which constitutes a violation of rights and harm to individuals. The involvement of law enforcement and national security agencies indicates the seriousness of the harm. The AI system's use directly led to harm, fulfilling the criteria for an AI Incident. The misinformation about political figure deepfakes is noted but does not change the classification as the primary harm from the deepfake videos is established.

Implicated in Xiao Yu's multimillion face-swap porn scheme, influencer Xiaoxiao resurfaces to respond after five months in hiding - Entertainment

2022-03-25
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of DeepFake AI software to create and distribute manipulated pornographic content without consent, causing violations of individual rights and defamation, which are harms under the AI Incident definition. The AI system's use directly led to these harms, and legal actions have been taken against the perpetrators. The involvement of AI in the creation and dissemination of harmful content meets the criteria for an AI Incident rather than a hazard or complementary information.

Unluckily swept into the face-swap porn scandal, Xiaoxiao is asked on stream: "Are you still friends with Xiao Yu?" He tells the truth - Entertainment

2022-03-25
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (DeepFake software) to create manipulated videos without consent, which constitutes a violation of personal rights and privacy, a form of harm to individuals. The illegal distribution and profit from such content further confirm harm has occurred. The involvement of AI in the creation and dissemination of these videos directly led to legal prosecution and social harm, meeting the criteria for an AI Incident.

Implicated in Xiao Yu's face-swap case, Xiaoxiao returns after disappearing for five months

2022-03-25
HiNet
Why's our monitor labelling this an incident or hazard?
The article describes a case where Deepfake AI technology was used to create illegal content, causing harm through unlawful profit and potential rights violations. This constitutes an AI Incident because the AI system's use directly led to legal and reputational harm. The involvement of "Xiaoxiao" is indirect but related to the AI Incident. Therefore, this event is classified as an AI Incident.

2022-03-26
HiNet
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI technology was used to create face-swap deepfake videos ('換臉謎片') without consent, causing harm to many victims. The use of AI for malicious deepfake content constitutes a violation of personal rights and privacy, which falls under violations of human rights or breach of applicable law. The criminal investigation and the harm caused to victims confirm that the AI system's use directly led to harm. Therefore, this event meets the criteria for an AI Incident.

Xiao Yu indicted over face-stealing porn! Xiaoxiao returns after five months in hiding: "I'll tell everyone what I know" | Entertainment News | 三立新聞網 SETN.COM

2022-03-24
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of DeepFake AI technology to create and distribute non-consensual pornographic videos, which directly harms the victims' privacy and reputation. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals. The legal actions and public responses confirm the harm has materialized. Therefore, this event is classified as an AI Incident.

Breaking / Xiao Yu and male assistant indicted after earning NT$13.38 million selling face-swap porn | Society | 三立新聞網 SETN.COM

2022-03-16
三立新聞
Why's our monitor labelling this an incident or hazard?
The use of Deepfake technology, an AI system capable of generating realistic face swaps, directly caused harm by producing and distributing non-consensual explicit content, violating privacy and defamation laws. This meets the criteria for an AI Incident as the AI system's use led to violations of human rights and legal obligations protecting individuals' rights. The prosecution confirms the harm has materialized and is legally recognized.

Chicken Cutlet Girl and Kao Chia-yu among the victims! Million-subscriber YouTuber Xiao Yu indicted for selling face-swap porn - Society - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI Deepfake technology to produce and distribute pornographic videos without consent, involving well-known public figures. This use of AI has directly caused harm through defamation, violation of personal data protection laws, and emotional distress to the victims. The involvement of AI in creating manipulated content that leads to these harms fits the definition of an AI Incident, as the AI system's use has directly led to violations of human rights and personal harm.

Huang Jie: face-swap porn is still circulating, lawsuit already filed - Society - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article describes the criminal use of AI-powered face-swapping (deepfake) technology to produce and distribute non-consensual explicit videos, which is a direct violation of victims' rights and causes significant harm. The AI system's misuse is central to the harm, and legal actions have been taken. Therefore, this event qualifies as an AI Incident due to realized harm involving an AI system's misuse leading to violations of rights and harm to individuals.

Chicken Cutlet Girl and Kao Chia-yu among the victims! Million-subscriber YouTuber Xiao Yu indicted for selling face-swap porn - New Taipei City - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI Deepfake technology to create and sell pornographic videos without consent, which constitutes a violation of personal rights and defamation, both recognized harms under the AI Incident definition. The AI system's use directly led to these harms, and the legal prosecution confirms the materialization of harm. Therefore, this event qualifies as an AI Incident.

Face-swap porn case indictment / Legal experts: with sentences for multiple offenses combined, influencer Xiao Yu faces up to 30 years - Society - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) in a criminal act that has directly caused harm to individuals' rights and reputations, constituting violations of personal data protection and defamation laws. The harm is realized and the legal system is responding to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of human rights and harm to communities.

Xiao Yu indicted for selling face-swap porn; prosecutors: it amounts to online sexual violence and bullying of the victims - New Taipei City - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly states the use of AI Deepfake technology (an AI system) to create harmful content without consent, leading to direct harm to individuals' rights and dignity. The harms include violations of privacy, sexual harassment, and defamation, which fall under violations of human rights and harm to communities. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

119 victims including Kao Chia-yu and Chicken Cutlet Girl; million-subscriber YouTuber Xiao Yu indicted for selling face-swap porn - Society - 自由時報電子報

2022-03-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of deepfake AI technology to create harmful manipulated videos without consent, affecting 119 victims including politicians and celebrities. This misuse of AI has directly caused harm to individuals' rights and reputations, fulfilling the criteria for an AI Incident under violations of human rights and personal data protection. The legal actions and societal impact confirm the realized harm caused by the AI system's use.

"Xiao Yu" indicted after making more than NT$13.38 million from face-swaps; as many as 119 victims | 聯合新聞網

2022-03-16
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (face swapping via DeepFace) to create harmful deepfake content. The AI system's use directly led to violations of personal rights and harm to individuals, fulfilling the criteria for an AI Incident. The malicious use of AI to produce and distribute non-consensual pornographic videos constitutes a clear breach of fundamental rights and causes significant harm to the victims. Therefore, this event is classified as an AI Incident.

Xiao Yu indicted on three counts after earning over ten million selling face-swap porn | 聯合新聞網

2022-03-16
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology (an AI system) to create and distribute harmful content without consent, leading to violations of personal privacy, defamation, and psychological harm to at least 119 victims. The AI system's use directly caused these harms, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The legal prosecution and legislative response further confirm the recognition of harm caused by AI misuse.

119 victims! Influencer Xiao Yu indicted by prosecutors after raking in over ten million selling synthetic porn | Life | NOWnews今日新聞

2022-03-16
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (DeepFake) to create synthetic pornographic content without consent, which has caused direct harm to the victims' personal rights, reputation, and privacy. This meets the criteria for an AI Incident as the AI system's use directly led to violations of human rights and harm to individuals. The legal prosecution further confirms the recognition of harm caused. Therefore, this event is classified as an AI Incident.

Xiao Yu indicted over "face-swaps"; Huang Jie urges people not to download the videos: "Don't make the victims live in fear"

2022-03-16
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Deepfake technology, an AI system that synthesizes faces to create fake videos. The misuse of this AI system has directly caused harm to individuals by violating their rights and causing psychological distress, fitting the definition of an AI Incident. The event involves the development and use of the AI system for malicious purposes, leading to violations of human rights and harm to the victims. Therefore, this qualifies as an AI Incident.

Taiwanese influencer "Xiao Yu" indicted after raking in over ten million selling face-swap indecent videos - 大纪元

2022-03-16
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Deep Fake AI technology to create non-consensual explicit videos, which were distributed and sold for profit. This use of AI directly caused harm to the victims through defamation, violation of privacy, and emotional distress, fulfilling the criteria for harm to persons and violation of rights. The involvement of AI in the malicious creation and dissemination of these videos is central to the incident. Therefore, this event is classified as an AI Incident.

Taiwanese influencer "Xiao Yu" indicted after raking in over ten million selling face-swap indecent videos - 大紀元

2022-03-16
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (DeepFace deepfake technology) to produce and distribute harmful content without consent, leading to violations of privacy, defamation, and moral harm to individuals. The AI system's use directly caused harm to the victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The malicious and profit-driven use of AI-generated deepfake videos constitutes a clear AI Incident rather than a hazard or complementary information.

Influencer Xiao Yu indicted today after making over ten million from face-swaps; Huang Jie: stop making the victimized women "live in fear" | Society | Newtalk新聞

2022-03-16
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to produce harmful content without consent, causing violations of personal rights and privacy, which are recognized harms under the AI Incident definition. The AI system's use directly led to these harms, including defamation and breach of personal data protection laws. Therefore, this qualifies as an AI Incident.

Xiao Yu, who "raked in over ten million from face-swaps," indicted today! As many as 119 victims | Influencers | Newtalk新聞

2022-03-16
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI deepfake technology was used to create fake explicit videos without consent, causing harm to individuals' rights and reputations. The harm is direct and significant, involving violations of privacy and defamation, which fall under breaches of fundamental and personal rights. The involvement of AI in generating these videos is central to the incident, and the legal action taken confirms the harm has occurred. Therefore, this event qualifies as an AI Incident.

Even Tsai Ing-wen victimized! YouTuber Xiao Yu indicted today after raking in over ten million from face-swap porn - Society

2022-03-16
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of DeepFake AI technology to produce manipulated pornographic content without consent, causing harm to individuals' reputations and privacy, which are violations of human rights and personal dignity. The AI system's use directly led to these harms and illegal profits, fulfilling the criteria for an AI Incident. The involvement of AI in the creation and distribution of these harmful videos is clear and central to the event.

Xiao Yu indicted over face-swaps; Huang Jie reveals: unscrupulous netizens are still spreading the videos - Politics

2022-03-16
中時新聞網
Why's our monitor labelling this an incident or hazard?
The use of DeepFake AI technology to create and distribute non-consensual explicit videos constitutes a violation of personal rights and privacy, which falls under violations of human rights and applicable laws protecting fundamental rights. The harm is realized and ongoing, as victims suffer from the unauthorized use of their likeness and the spread of these videos. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing harm to individuals.

Influencer Xiao Yu indicted for abusing face-swap technology to synthesize celebrity porn | Society | 中央社 CNA

2022-03-16
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the defendants used AI and deepfake technology to synthesize faces onto pornographic videos, which were then distributed online for profit. This use of AI directly caused harm to individuals through privacy violations, defamation, and sexual exploitation, fulfilling the criteria for an AI Incident under the OECD framework. The involvement of AI in the creation and dissemination of harmful content, and the resulting legal charges, confirm the direct link to harm.

Xiao Yu and his assistant indicted for making deepfake face-swap indecent videos of 119 women

2022-03-16
公共電視
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create harmful content (non-consensual explicit videos) that directly violates personal data protection laws and harms the victims' rights and privacy. The harm is realized and significant, affecting over 100 individuals. The involvement of AI in producing the harmful content and the resulting legal action classify this as an AI Incident under violations of human rights and breach of applicable law protecting fundamental rights.

Selling celebrity face-swap porn: indicted Xiao Yu faces sky-high settlement payments and up to 30 years in prison | Society | 三立新聞網 SETN.COM

2022-03-17
三立新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a face-swapping AI system to create non-consensual synthetic pornographic videos, which constitutes a violation of personal rights and privacy (human rights violations). The harm is realized and significant, affecting at least 119 victims. The AI system's development and use directly led to these harms, and legal prosecution is underway. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused violations of human rights and harm to individuals.

Influencer "Xiao Yu" indicted after raking in over ten million selling face-swap indecent videos of 119 people | 台灣大紀元

2022-03-16
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of Deep Fake AI technology to produce and distribute manipulated explicit videos without consent, causing direct harm to individuals' rights and reputations. This constitutes a violation of human rights and legal protections, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to these harms, and the legal prosecution confirms the recognition of these harms as a result of AI misuse.

Deepfake face-swap indecent videos inflict "lifelong harm"; Kao Chia-yu hopes legal amendments will serve as a deterrent - 台視新聞網

2022-03-16
台視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of Deepfake AI technology to produce harmful fake explicit videos of individuals without their consent, which has caused 'lifelong harm' to the victims. This constitutes a violation of personal rights and privacy, fitting the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The AI system's misuse directly led to these harms, and the legal prosecution confirms the incident's materialization. Therefore, this event qualifies as an AI Incident.

Kao Chia-yu and Chicken Cutlet Girl face-swapped into porn; million-subscriber YouTuber "Xiao Yu" indicted - 台視新聞網

2022-03-16
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to produce and distribute harmful content without consent, causing violations of personal rights and legal protections. The AI system's use directly caused harm to individuals' reputations and privacy, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The involvement of AI in the creation of these manipulated videos is central to the harm described.