High School Students Use AI Deepfake Technology to Create and Distribute Sexual Images, Nearly 20 Victims in Taichung


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Two male high school students in Taichung, Taiwan, used AI deepfake technology to create and distribute non-consensual sexual images of nearly 20 female classmates. The incident caused significant psychological harm and privacy violations. Authorities and schools have launched investigations, and university admission for one perpetrator is under review.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology (deepfake) to produce and spread harmful manipulated images, resulting in realized harm to individuals (psychological distress and violation of rights). This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons and violations of rights. The involvement of AI is clear and central to the harm caused, and the incident is ongoing with active investigation and response.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


20 Female Students Victimized! Deepfake Sexual Image Case Shocks a Private High School in Taichung; Implicated Male Student Admitted to University via the Stars Program

2026-05-02
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake) to produce and spread harmful manipulated images, resulting in realized harm to individuals (psychological distress and violation of rights). This meets the criteria for an AI Incident because the AI system's use directly led to harm to persons and violations of rights. The involvement of AI is clear and central to the harm caused, and the incident is ongoing with active investigation and response.

Male Student at a Taichung Private School Distributes Deepfake Sexual Images; Nearly 20 Female Classmates Victimized; School Issues Statement

2026-05-02
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, which is an AI system used to create synthetic sexual images without consent. The distribution of these images has directly led to psychological harm and distress to the victims, fulfilling the criteria for an AI Incident under the definitions provided. The AI system's use in generating harmful content and its subsequent dissemination constitute a direct cause of harm to individuals, including violations of privacy and mental health impacts. Therefore, this event qualifies as an AI Incident.

Deepfake Sexual Image Case Erupts at a Taichung Private High School; Implicated Student May Lose Stars Program Admission

2026-05-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and distribution of deepfake sexual images, which are AI-generated synthetic media. The harm caused includes violations of personal rights and potential reputational and psychological harm to the victims. The involvement of AI (deepfake technology) directly led to these harms. Therefore, this qualifies as an AI Incident under the definition of causing violations of human rights or breach of obligations intended to protect fundamental rights.

What to Do If AI Deepfakes Alter Your Photos into Indecent Images? Experts: Don't Panic, Do This Immediately

2026-05-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated sexual images, which have been distributed and caused harm to individuals. This constitutes a violation of human rights and causes harm to communities, fitting the definition of an AI Incident. The article reports on actual harm occurring due to the AI system's use, not just potential harm or general information, so it is classified as an AI Incident.

Taichung Private School Student Allegedly Distributes Deepfake Sexual Images; Nearly 20 Victims; Implicated Student Admitted to University via the Stars Program

2026-05-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the distribution of deepfake sexual images, which are AI-generated or AI-manipulated content. The harm is realized as nearly 20 victims have been affected by the spread of these images, constituting injury to personal dignity and privacy, which falls under violations of human rights. The AI system's use in creating these images is central to the harm. The article also describes ongoing investigations and potential consequences for the perpetrators, confirming the incident's seriousness. Hence, this is an AI Incident.

Deepfake Sexual Images Circulate at a Taichung School: It Is Not Your Fault; Four Steps for Victims to Protect Their Rights

2026-05-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create and distribute manipulated sexual images without consent, directly causing harm to individuals' privacy and rights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The article describes realized harm and legal implications, not just potential risks, so it is not merely a hazard or complementary information.

Two High School Students Allegedly Spread Deepfake Sexual Images; Nearly 20 Girls Victimized

2026-05-02
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, which is an AI system generating synthetic images. The malicious use of this AI system to create and distribute harmful sexual deepfake images directly leads to violations of rights and harm to the victims. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities.

A Repeat of the "Xiaoyu (小玉) Face-Swap" Case at a Taichung Private High School! Student Distributes Deepfake Sexual Images; Nearly 20 Victims

2026-05-02
mnews.tw
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology (an AI system) to create harmful content that has been distributed, causing direct harm to individuals (violation of rights and harm to persons). This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to individuals). The involvement of AI deepfake technology in producing and spreading non-consensual sexual images is a clear case of harm caused by AI misuse.

"Deepfake" Fabricated Sexual Images Leaked! Implicated High School Student's Stars Program Admission Sparks Debate; School Launches Investigation, Prosecutors and Police to Act

2026-05-02
mnews.tw
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake (face-swapping) technology to produce false sexual images, which were then disseminated online, harming approximately 20 female students, many of whom are minors. This constitutes a violation of human rights and causes harm to individuals and the community. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The involvement of authorities and ongoing investigations further confirm the seriousness and realized harm of the event.

Distributed "Indecent Deepfake Images" of Multiple Female Classmates Yet Admitted via the Stars Program; School: Eligibility Pending Investigation Results

2026-05-02
台視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI software to create deepfake images, which is an AI system. The malicious use of this AI system directly led to harm to the victims' mental health and personal rights, fulfilling the criteria for an AI Incident. The involvement of AI in producing harmful content and the resulting psychological harm to nearly 20 victims clearly meets the definition of an AI Incident under harm to persons and violation of rights.

Deepfake Sexual Images of 20 Taichung High School Girls Distributed; Implicated Male Student Admitted to University via the Stars Program

2026-05-03
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, which is an AI system for generating manipulated images. The malicious creation and distribution of these deepfake sexual images have directly caused harm to the victims' privacy, mental health, and safety, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, with investigations and societal impacts noted. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

A Senior-Year Version of Xiaoyu (小玉)! Deepfake Indecent Photos Surface at a Taichung Private High School, "20 Female Students Victimized"; Implicated Male Student Still Admitted to University via the Stars Program

2026-05-03
mnews.tw
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, an AI system capable of generating synthetic images, to create non-consensual explicit photos of female students. The distribution of these images caused direct harm to at least 20 victims, including minors, which qualifies as harm to persons and communities. The involvement of AI in generating the harmful content and the resulting violation of rights and emotional harm meets the criteria for an AI Incident. The event is not merely a potential risk but a realized harm scenario involving AI misuse.