Generative AI Tools Facilitate Child Sexual Exploitation in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Taiwan, generative AI tools such as 'one-click undressing' and deepfake platforms have enabled the creation and spread of sexually exploitative images of minors, with over half of reported online abuse cases involving such content. NGOs and officials are calling for stricter regulation and bans on these AI tools to protect children. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of generative AI systems to create illegal sexual images of minors, which directly causes harm to children and violates legal protections. The AI's role in generating these images is pivotal to the harm described. The article details realized harm (child sexual exploitation and psychological damage) caused by AI misuse, meeting the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this is classified as an AI Incident. [AI generated]
AI principles
Respect of human rights; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Illegal 'one-click AI undressing' sites that strip uploaded everyday photos are surging; civic groups urge vigilance against a new form of child sexual exploitation

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create illegal sexual images of minors, which directly causes harm to children and violates legal protections. The AI's role in generating these images is pivotal to the harm described. The article details realized harm (child sexual exploitation and psychological damage) caused by AI misuse, meeting the criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this is classified as an AI Incident.

AI undressing tools fuel child sexual exploitation; civic groups call for a total ban

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for generating realistic sexual images of minors and women without their consent, which constitutes a violation of human rights and causes harm to individuals and communities. The AI tools are directly involved in producing and spreading harmful content, leading to realized harm (child sexual exploitation and sexual violence). Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

52% of complaints involve child sexual exploitation; 'undressing' generative AI should be banned

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems being used to produce illegal and harmful content involving child sexual exploitation, which constitutes a violation of human rights and causes significant harm to individuals and communities. The AI systems' use in generating and spreading such content directly leads to realized harm, meeting the criteria for an AI Incident. The discussion of regulatory and educational responses is complementary but secondary to the primary incident of harm caused by AI misuse.

Extorted for game points and virtual items as their desires are exploited... men are the main victims of nude video-chat scams

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI deepfake technology to create synthetic sexual images that are used to extort victims, which constitutes a violation of rights and harm to individuals. The harm is realized and ongoing, with victims being extorted for money or virtual goods. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The involvement is not speculative or potential but actual and causing significant harm, thus not a hazard or complementary information.

Nude video chats secretly recorded; all victims are men

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake images being involved in sexual exploitation and extortion cases, which are realized harms to individuals. The AI system's use in generating or manipulating images for malicious purposes directly leads to violations of rights and harm to victims. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (sexual exploitation, extortion) and violations of rights. The presence of AI deepfake technology in the harm-causing process is clear and central to the incident described.

'One-click undressing' runs rampant... AI becomes a tool of crime in child sexual exploitation

2026-02-10
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI for face-swapping and image manipulation) being used maliciously to produce and distribute sexual images of minors, which directly causes harm to children (psychological harm and violation of rights). This fits the definition of an AI Incident because the AI's use has directly led to violations of human rights and harm to communities (children and society). The article also mentions legal and policy responses, but the primary focus is on the realized harm caused by AI misuse, not just on responses or potential risks.

58 platforms offer one-click AI undressing! A major gap in child sexual exploitation protections; the Ministry of Health and Welfare responds

2026-02-10
China Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexualized images of minors and women, which directly contributes to child sexual exploitation, a clear violation of human rights and legal protections. The article describes realized harm through the exploitation and abuse cases linked to these AI-generated images. The involvement of AI in producing such harmful content meets the criteria for an AI Incident, as the AI's use has directly led to violations of rights and harm to vulnerable communities. The government's discussion of potential legislation is a response to this incident, not the primary event itself.

AI face-swapping and one-click undressing: civic groups push for a legislative ban

2026-02-10
China Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate harmful content (AI face swapping and AI nudity generation) used to create child sexual exploitation material, which is a direct violation of human rights and causes significant harm to children and communities. The AI's use in producing and distributing such content constitutes direct harm. The article discusses ongoing harm and calls for legal action, confirming that harm is realized, not just potential. Hence, it meets the criteria for an AI Incident.

One-click AI undressing fuels child sexual exploitation; ECPAT Taiwan joins international petition calling for the tools to be taken down

2026-02-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI for image manipulation) that have directly led to significant harm, namely child sexual exploitation and abuse, which is a violation of human rights and causes harm to communities. The AI tools enable the creation of illegal and harmful content, thus directly contributing to the harm described. Therefore, this qualifies as an AI Incident under the framework.

Using AI to generate child sexual images punishable by up to seven years in prison

2026-02-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate child sexual abuse images, which violates human rights and the legal protections afforded to children. Although no single specific incident of harm is reported, the article describes ongoing and potential harms from using AI to create illegal and harmful content, and the legal penalties and calls for regulation reflect recognition of how serious such misuse is. Because the article focuses on the current and potential harms of AI-generated child sexual exploitation imagery and the legal framework addressing them, it qualifies as an AI Incident: the link between AI use and violations of the rights and laws protecting children is direct.

Generative AI runs rampant! Civic groups worry about the production of child sexual images; Ministry of Health and Welfare: up to seven years' imprisonment

2026-02-10
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems capable of generating images, including child sexual abuse images, which are illegal and harmful. While no direct incident of harm has been reported, the widespread availability of such AI tools and the expressed concerns about their misuse indicate a credible risk of harm occurring. Therefore, this situation fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harm, including violations of laws protecting children and causing harm to communities.

AI undressing tools fuel child sexual exploitation; civic groups call for a total ban

2026-02-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (AI face-swapping and image generation technologies) used to create realistic sexual images of minors and women without consent, which constitutes a violation of human rights and causes harm to communities. The harm is realized and ongoing, as evidenced by the large number of reported cases and the facilitation of child sexual exploitation. The article calls for legal and regulatory responses, indicating the severity and direct impact of the AI system's use. Hence, this qualifies as an AI Incident under the OECD framework.

AI undressing tools fuel child sexual exploitation; Ministry of Health and Welfare to address them under the prevention act

2026-02-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of generating realistic sexual images of minors without their consent, which directly relates to violations of human rights and child sexual exploitation. While no concrete harm has been reported yet, the plausible risk of such harm occurring due to the use of these AI tools is clear and significant. Therefore, this situation constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to serious harms covered under the child sexual exploitation prevention laws.

AI Basic Act to include child protection! Taiwan to assess AI application risks and guard against digital sexual violence

2026-02-10
Commercial Times
Why's our monitor labelling this an incident or hazard?
The article centers on risk evaluation, governance, and preventive measures related to AI's potential misuse in generating harmful content like deepfake images involving minors. While it acknowledges existing digital sexual exploitation risks exacerbated by AI, it does not describe a particular AI incident causing realized harm. The emphasis is on assessing AI application risks and implementing protective frameworks, which aligns with the definition of an AI Hazard or Complementary Information. Given the detailed description of ongoing assessments, expert consultations, and policy development, the article primarily provides complementary information about AI risk management and governance rather than reporting a new incident or imminent hazard. Therefore, it is best classified as Complementary Information.

Generative AI fuels child sexual exploitation; civic groups call for legislative regulation

2026-02-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of generative AI systems to produce and distribute child sexual exploitation material, which directly causes harm to children and violates their rights, fitting the definition of an AI Incident. The AI systems' development and use have directly led to significant harm (child sexual exploitation and abuse imagery), and the article reports on this harm as ongoing and significant. Although the government is considering further regulation, the harm is already occurring, so this is not merely a hazard or complementary information. Therefore, the classification is AI Incident.

Generative AI fuels child sexual exploitation; civic groups call for legislative regulation (simplified Chinese edition)

2026-02-10
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of generative AI systems to produce and disseminate child sexual exploitation material, which constitutes a violation of human rights and criminal law protections for children. The AI systems' development and use have directly led to significant harm to children and communities by enabling new forms of sexual exploitation and abuse. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to realized harm (child sexual exploitation). The article also mentions ongoing governance responses, but the primary focus is on the harm caused by AI misuse.

Child sexual exploitation exceeds 50% of reports! AI 'one-click undressing' is the biggest accomplice; ECPAT Taiwan: 58 platforms are the source of the problem

2026-02-10
Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI for image manipulation) that have directly led to significant harm to children through sexual exploitation and psychological trauma. The presence of AI systems is explicit, and the harm is realized and ongoing, meeting the criteria for an AI Incident. The article also mentions responses, but its primary focus is the harm caused by AI misuse rather than complementary information.

AI undressing tools fuel child sexual exploitation; civic groups call for a ban

2026-02-11
Mandarin Daily News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate realistic sexual images, including those depicting minors, which directly leads to violations of human rights and child sexual exploitation—a serious harm. The article documents actual cases and ongoing harm, not just potential risks. The AI's role is pivotal in enabling the creation and dissemination of such content, lowering the threshold for criminal activity and increasing victim risk. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to significant harm.

AI fuels child sexual exploitation; civic groups call for legislative regulation

2026-02-10
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of generative AI systems to create and distribute sexually exploitative content involving minors, which directly causes harm to children and violates their rights. The AI systems' role is pivotal in enabling new forms of child sexual exploitation crimes, fulfilling the criteria for an AI Incident under the OECD framework. The article describes realized harm (child sexual exploitation facilitated by AI), not just potential harm, and thus qualifies as an AI Incident rather than a hazard or complementary information.

Generative AI fuels child sexual exploitation; civic groups: nearly 2,500 reported cases

2026-02-10
Public Television Service (PTS)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to generate inappropriate and exploitative images of minors, which directly harms children by violating their rights and exposing them to sexual exploitation. The large number of reported cases confirms that harm has occurred. The involvement of AI in generating these images is central to the harm described. Therefore, this event meets the criteria for an AI Incident due to direct harm to human rights and child protection.