Taiwan Strengthens Laws Against AI-Generated Deepfake Sexual Violence

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In response to rising cases of AI-generated deepfake sexual images and videos causing harm, Taiwan's Executive Yuan approved amendments to four laws. The new measures criminalize the creation and distribution of non-consensual deepfake sexual content, increase penalties, and require internet platforms to promptly remove illegal material, aiming to better protect victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfake technology) to produce and spread non-consensual sexual images, which directly harms individuals' rights and privacy. The article reports on legal responses to these harms, indicating that such AI-enabled misuse has already caused significant harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The legislative amendments aim to address and punish these harms.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Accountability; Robustness & digital security; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Other

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Executive Yuan Amends Laws to Crack Down on Sexual Violence; Distributing Forged Images for Profit Punishable by Up to 7 Years in Prison - Society - Liberty Times Net

2022-03-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to produce and spread non-consensual sexual images, which directly harms individuals' rights and privacy. The article reports on legal responses to these harms, indicating that such AI-enabled misuse has already caused significant harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The legislative amendments aim to address and punish these harms.

Cracking Down on Sexual Violence! Executive Yuan Passes 4 Legal Amendments; Distributing False Sexual Images for Profit Punishable by Up to 7 Years - Politics - Liberty Times Net

2022-03-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create false sexual images and the legislative response to criminalize such acts. The harms described include violations of privacy and sexual violence, which are direct harms to individuals. The AI system's use in producing harmful content is central to the event, making this an AI Incident under the framework, as the AI system's use has directly led to harm and legal consequences.

Combating Digital Sexual Violence: Executive Yuan Overhauls Four Laws | The Epoch Times Taiwan

2022-03-10
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create and distribute non-consensual sexual images, which has caused real harm to individuals (privacy violations, sexual violence). The legislative response targets these harms by criminalizing such AI-enabled acts. Since the AI system's use has directly led to violations of rights and harm to individuals, this qualifies as an AI Incident under the framework. The article focuses on the harms caused and the legal measures to address them, not merely on potential risks or general AI developments.

Combating Intimate Sexual Image Crimes: Criminal Code to Add a Chapter on "Offences Against Sexual Privacy and False Sexual Images" - Politics - Liberty Times Net

2022-03-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology (deepfake) in creating false sexual images that harm individuals' privacy and reputation, which constitutes a violation of human rights and personal dignity. The harms described are realized and significant, involving direct injury to individuals' privacy and reputation. The legislative response is a governance measure addressing an existing AI Incident type harm. Therefore, the event relates to an AI Incident as it involves the use and misuse of AI systems causing direct harm to people.

Curbing Face-Swap Crime! Executive Yuan Passes 4 Legal Amendments; Forging and Distributing Images for Profit Punishable by Up to 7 Years - Liberty Times Video Channel

2022-03-10
Liberty Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake technology, which is an AI system used to create false sexual images. The legislative amendments aim to punish the production and distribution of such AI-generated content, especially when it causes harm such as privacy violations and sexual violence. Since the harms described (privacy violations, sexual violence, and distribution of false sexual images) have already occurred and the law is responding to these harms, this constitutes an AI Incident. The AI system's misuse has directly led to violations of human rights and harm to individuals, meeting the criteria for an AI Incident under the OECD framework.

Curbing Face-Swap Crime! Executive Yuan Passes 4 Legal Amendments; Forging and Distributing Images for Profit Punishable by Up to 7 Years - Liberty Times Video Channel

2022-03-10
Liberty Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology used to create false sexual images. The legislation addresses harms caused by the use of such AI systems, including violations of privacy and sexual violence, which are harms to individuals and their rights. However, the article focuses on the passing of laws to regulate and punish these harms rather than describing a specific incident of harm or a direct AI system malfunction. Therefore, this is a societal and governance response to an existing AI-related harm issue, making it Complementary Information rather than an AI Incident or AI Hazard.

Distributing Face-Swapped Pornographic Videos for Profit Punishable by Up to 7 Years | UDN

2022-03-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Deepfake technology, which is an AI system capable of generating synthetic images by swapping faces. The harms described include non-consensual sexual image creation and distribution, which violate privacy and cause significant harm to victims. The legislation aims to prevent and punish these harms. Although the article mainly discusses the legal response, the underlying issue is an AI Incident because the AI system's use has directly led to harm. The article's main focus is on the governance response, but the harms caused by the AI system are real and ongoing, qualifying this as an AI Incident rather than merely Complementary Information or an AI Hazard.

Preventing Sexual Violence Crimes: Executive Yuan Plans to Pass Four Legal Amendments on the 10th | UDN

2022-03-09
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake technology) to create and distribute non-consensual sexual images, which constitutes a violation of privacy and personal rights, a form of harm under the AI Incident definition. The article discusses concrete harms that have occurred (e.g., non-consensual deepfake videos) and the legislative response to prevent and punish such harms. Since the AI system's misuse has directly led to harm and the article focuses on legal measures to address this, it qualifies as an AI Incident rather than a hazard or complementary information.

Four Laws to Prevent Sexual Violence Crimes; Digital Women's Rights Group: Without a Dedicated Act, Enforcement Will Be Difficult | UDN

2022-03-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of "deepfake technology" (an AI system) in the context of sexual violence crimes, specifically the creation and distribution of non-consensual sexual images. This involves the use of AI-generated content causing harm to individuals (violation of rights and harm to persons). The legislative amendments aim to increase penalties and improve victim protection mechanisms related to these AI-enabled harms. Since the article describes ongoing harm caused by AI-generated deepfake content and legal responses to it, this qualifies as an AI Incident. The focus is on the harm caused by AI misuse and the legal framework to address it, not merely on potential future harm or general AI developments, so it is not a hazard or complementary information.

Curbing Deepfake Forged Sexual Videos: Executive Yuan Approves Amendment to the Criminal Code | UDN

2022-03-10
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake) to produce false sexual images, which directly harms individuals' privacy and reputation, constituting violations of rights under applicable law. The article describes the legal response to these harms, indicating that such AI-enabled harms have occurred and are being addressed through criminal penalties. Therefore, this qualifies as an AI Incident because the development and use of AI systems (deepfake technology) have directly led to violations of rights and harm to individuals. The article focuses on the legal measures taken in response to these harms, not merely on the potential risk or general AI developments, so it is not Complementary Information or an AI Hazard.

Executive Yuan Amends 4 Sexual Violence Laws; Distributing Images Punishable by Fines of Up to NT$600,000 | UDN

2022-03-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly references deepfake technology, a form of AI system, used to create false sexual images that harm individuals' privacy and dignity. The legislation targets the use and distribution of such AI-generated content, indicating that harm has occurred or is occurring. The involvement of AI in generating harmful content and the legal measures to address it meet the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities. The article is not merely about potential harm or general AI policy but about concrete legal action addressing realized harms from AI misuse.

Profiting from Deepfake Indecent Videos Punishable by Up to Seven Years | Tax & Regulation | Finance | Economic Daily News

2022-03-10
UDN Money
Why's our monitor labelling this an incident or hazard?
The article focuses on the legal framework being updated to address harms caused by AI-generated deepfake sexual images. While it does not describe a particular AI Incident (no specific case of harm is detailed as occurring now), it highlights the potential for serious harm through misuse of AI deepfake technology, including violations of privacy and sexual rights. The legislative changes aim to prevent and punish such harms. Therefore, this event is best classified as Complementary Information, as it provides governance and societal response context to AI-related harms rather than reporting a new AI Incident or AI Hazard.

Curbing Deepfake Forged Sexual Videos: Executive Yuan Approves Amendment to the Criminal Code | Financial Pulse | Finance | Economic Daily News

2022-03-10
UDN Money
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake) in the creation of false sexual images, which directly harms individuals' privacy and reputation, constituting violations of human rights and personal dignity. The article describes the legal response to these harms by criminalizing such acts and setting penalties, indicating that harm from AI-generated deepfake content has been recognized and is being addressed. Therefore, this is an AI Incident as it concerns realized harm caused by AI systems (deepfake technology) and the legal measures taken to address it.

Executive Yuan Passes Four Laws to Prevent Sexual Violence Crimes; Profiting from Deepfake Images Punishable by Up to 7 Years | Politics | NOWnews

2022-03-10
NOWnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI Deepfake technology to create non-consensual sexual images, which constitutes an AI system's involvement in causing harm through misuse. The legislation targets harms such as violations of privacy, sexual violence, and exploitation facilitated by AI-generated deepfake images. Since these harms are realized and the laws respond to actual incidents involving AI misuse, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals.

Preventing Sexual Violence Crimes: Taiwan's Executive Yuan Plans to Pass Four Legal Amendments on the 10th - The Epoch Times

2022-03-09
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Deepfake technology, an AI system that generates synthetic images or videos, being used to create false sexual content that harms individuals. The legislative response targets the use and misuse of such AI systems leading to violations of privacy and sexual violence, which are harms to individuals and communities. Since the harms are occurring and the legislation is a response to these harms, this qualifies as an AI Incident. The article focuses on the harms caused by AI-generated content and the legal measures to address them, not just potential future risks or general AI news.

Preventing Sexual Violence Crimes: Taiwan's Executive Yuan Plans to Pass Four Legal Amendments on the 10th - The Epoch Times

2022-03-09
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Deepfake technology, an AI system capable of generating realistic but false sexual images, which has been misused to harm individuals. The legislative proposals target the use and distribution of such AI-generated content, which has already caused harm to victims (e.g., non-consensual sexual images). Therefore, the event involves the use of AI systems leading to violations of privacy and sexual violence-related harms, fitting the definition of an AI Incident. The article focuses on the legal response to existing harms caused by AI misuse rather than potential future risks or general AI developments, so it is not a hazard or complementary information.

Combating Digital Sexual Violence: Taiwan's Executive Yuan Overhauls Four Laws - The Epoch Times

2022-03-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create and distribute non-consensual sexual images, which has caused direct harm to individuals (privacy violations, sexual exploitation) and communities. The legislative reforms are a response to these harms, indicating that the AI system's misuse has already led to significant harm. Hence, this is an AI Incident as the AI system's use has directly led to violations of rights and harm to persons.

Combating Digital Sexual Violence: Taiwan's Executive Yuan Overhauls Four Laws - The Epoch Times

2022-03-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) in creating non-consensual sexual images, which constitutes digital sexual violence harming individuals' privacy and dignity. The legislative reforms address these harms by criminalizing such acts and setting penalties. Since the harms have occurred and the AI system's role (deepfake technology) is pivotal in causing these harms, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to individuals through AI-enabled misuse.

Executive Yuan Passes Amendments Explicitly Criminalizing the Creation and Distribution of False Sexual Images | Politics | Newtalk

2022-03-10
Newtalk
Why's our monitor labelling this an incident or hazard?
The article discusses a legal reform aimed at addressing harms caused by AI-enabled deepfake technology used to create and spread false sexual images, which can violate privacy and personality rights. While the use of AI in deepfake creation is central to the issue, the article does not report a specific AI incident or hazard event but rather the passing of laws to prevent such harms. This fits the definition of Complementary Information, as it details governance responses to AI-related risks and harms without describing a new incident or hazard itself.

Preventing Sexual Violence Crimes: Executive Yuan Plans to Pass Four Legal Amendments on the 10th | Politics | Central News Agency (CNA)

2022-03-09
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Deepfake technology, an AI system used to create manipulated sexual images without consent, which has led to real harms such as privacy violations and sexual violence. The legislative proposals aim to criminalize such AI-enabled misuse and impose obligations on internet providers to mitigate these harms. Since the article describes ongoing or past harms caused by AI-generated Deepfake content and legal responses to them, this qualifies as Complementary Information rather than a new AI Incident or AI Hazard. The focus is on governance and societal response to existing AI-related harms rather than reporting a new incident or a potential hazard.

Executive Yuan Overhauls Sexual Violence Prevention Laws; Women's Groups Hope a Dedicated Act Will Improve Prevention | Life | Central News Agency (CNA)

2022-03-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake) in creating non-consensual explicit videos, which constitutes digital sexual violence and a violation of personal rights. The article describes ongoing harm caused by AI misuse (deepfake videos) and legislative efforts to address it. Since the harm is occurring and the AI system's misuse is directly linked to violations of rights and personal harm, this qualifies as an AI Incident. The article focuses on the legislative response but the underlying issue is realized harm from AI misuse.

Executive Yuan Overhauls 4 Laws to Prevent Sexual Violence; Su Tseng-chang: Critically Important | Politics | Central News Agency (CNA)

2022-03-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of technology for deepfake creation as a target of the new laws, which involves AI systems capable of generating synthetic sexual images. The legislative effort aims to prevent harms such as privacy invasion and personality rights violations caused by such AI-generated content. Since the article focuses on legal reforms to address existing and potential harms caused by AI deepfake technology, it relates to AI systems' use and misuse leading to violations of human rights and privacy. However, the article does not describe a specific incident of harm occurring but rather the legal response to prevent such harms. Therefore, this is best classified as Complementary Information, providing governance and societal response context to AI-related harms.

Executive Yuan Overhauls 4 Sexual Violence Prevention Laws; Profiting from False Sexual Images Punishable by Up to 7 Years | Politics | Central News Agency (CNA)

2022-03-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create false sexual images, which have caused social harm and privacy violations. The legislative response is to criminalize such acts, indicating that the harms are realized and significant. Since the AI system's use has directly led to violations of rights and harm to individuals, this qualifies as an AI Incident. The article focuses on the harms caused by AI misuse and the legal measures to address them, not merely on potential future risks or general AI developments.

Ending Sexual Violence Crime: Executive Yuan Overhauls Four Major Laws to Add New Offences

2022-03-10
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The article explicitly references the use of AI-based deepfake technology to create false sexual images that harm individuals' privacy and reputation. The legislative response criminalizes these acts and strengthens protections for victims, indicating that such harms have been realized and are significant. Because the use of AI to generate deepfake content directly leads to violations of privacy and personal rights, the event fits the definition of an AI Incident. The focus is on addressing actual harms caused by AI misuse rather than potential future risks or general AI developments, so it is neither an AI Hazard nor Complementary Information; deepfake technology is central to the issue, so the event is not unrelated to AI.

Four Laws to Prevent Sexual Violence Crimes; Premier Su: Protecting the Physical and Mental Safety of Gender-Vulnerable Groups

2022-03-10
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) in creating harmful sexual content, which has led to social harm. The legislative response aims to prevent and penalize such misuse. However, the article focuses on the legal and policy measures rather than reporting a specific incident of harm caused by AI. Therefore, it does not describe a realized AI Incident but rather addresses the potential and ongoing misuse of AI technology in sexual violence, making it primarily a governance and societal response to AI-related harms.

Executive Yuan Overhauls Four Sexual Violence Prevention Laws, Adding a Chapter on False Sexual Image Offences

2022-03-10
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create non-consensual sexual images, which is a direct cause of harm to individuals' privacy and dignity, thus a violation of human rights. The legislative amendments aim to criminalize and penalize such harms caused by AI-generated content. Since the harm (violation of rights through non-consensual deepfake sexual images) is occurring and recognized, this qualifies as an AI Incident. The article focuses on the legal response to an existing AI-related harm rather than a potential future risk or a general update, so it is not a hazard or complementary information.

Executive Yuan Passes Amendments to the "4 Sexual Violence Prevention Laws"; Selling Face-Swapped Sexual Videos Punishable by Up to 7 Years

2022-03-10
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create non-consensual sexual videos, which has harmed multiple victims, including women in politics. This constitutes a violation of human rights and personal dignity, fitting the definition of harm (c). The legislative amendments are a response to this realized harm caused by AI misuse. Hence, the event is an AI Incident due to the direct link between AI system misuse and harm.

Combating Digital Sexual Violence: Ministry of Justice Proposes Offences for Deepfake Crimes and Distributing Sexual Images

2022-03-09
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) in creating non-consensual sexual content, which has harmed hundreds of individuals. The Ministry of Justice is proposing new laws to criminalize such AI-enabled offenses. However, the article primarily discusses the legislative and institutional measures being taken in response to these harms rather than describing a new incident or hazard itself. Therefore, this is Complementary Information that provides context and updates on governance and societal responses to AI-related harms, rather than reporting a new AI Incident or AI Hazard.

Forged Images Written into Law! Executive Yuan: Distributing False Sexual Images for Profit Punishable by Up to 7 Years | Politics | SETN.COM

2022-03-10
SET News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI deepfake technology) to create non-consensual fake sexual images, which has caused harm to individuals' privacy and dignity, thus constituting a violation of human rights. The legislation aims to address and penalize such harms. Since the article describes actual harm caused by the use of AI deepfake technology and the legal response to it, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals.

Executive Yuan Passes 4 Sexual Violence Prevention Laws; Profiting from AI Face-Swapped Sexual Images Punishable by 7 Years in Prison | UDN

2022-03-10
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology to create non-consensual sexual images, which has directly led to harm to individuals' privacy, dignity, and mental health, fulfilling the criteria for an AI Incident. The legal response aims to address these harms and prevent further incidents. The AI system's misuse is central to the harm described, and the article details realized harm rather than potential harm, so it is not merely a hazard or complementary information. Hence, the classification as AI Incident is appropriate.