Taiwanese Police Officer Used AI to Create Non-Consensual Explicit Images of Female Victims


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A police officer in Kaohsiung, Taiwan, secretly photographed at least six women while they were reporting crimes or giving statements, then used AI face-swapping to generate non-consensual explicit images of them. The incident led to legal action, disciplinary measures, and public outrage over the misuse of AI to violate privacy.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI to synthesize nude images of victims without consent, which is a direct violation of their rights and privacy. The AI system's misuse here caused clear harm to individuals, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content that infringes on fundamental rights and the ongoing legal investigation confirm this classification.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Accountability, Safety

Industries
Government, security, and defence

Affected stakeholders
Women

Harm types
Human or fundamental rights, Psychological

Severity
AI incident

AI system task:
Content generation, Recognition/object detection


Articles about this incident or hazard

Reporting a crime made them the subjects! Kaohsiung male officer secretly filmed young women to create "AI nude photos"; even asking to use the restroom made them targets. One major demerit and transfer

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to synthesize nude images of victims without consent, which is a direct violation of their rights and privacy. The AI system's misuse here caused clear harm to individuals, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content that infringes on fundamental rights and the ongoing legal investigation confirm this classification.
Kaohsiung male officer accused of secretly filming complainants and "synthesizing nude images"! His grim fate revealed

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the police officer used AI technology to synthesize nude images of victims from photos he secretly took. This use of AI directly led to harm by violating the victims' privacy and personal rights, which is a breach of fundamental rights protected by law. The harm is realized and ongoing, as multiple victims have been affected. Therefore, this qualifies as an AI Incident due to the direct use of AI in causing violations of human rights and personal harm.
Secretly filmed while giving a statement! Young Kaohsiung officer exposed for AI-synthesizing nude images; one major demerit and transfer

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to generate synthetic nude images (AI deepfake or face-swapping technology) without consent, which is a direct violation of privacy and human rights. The AI system's use directly led to harm to multiple victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The disciplinary and legal responses confirm the recognition of harm caused. Therefore, this event qualifies as an AI Incident.
"Secret filming of complainants" inside the police station! AI used to make indecent images of 6 women; the officer's photo revealed

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to generate non-consensual explicit images by face-swapping onto photos secretly taken by the officer. This constitutes a violation of human rights and privacy, fulfilling the criteria for harm under the AI Incident definition (violation of rights). The AI system's use directly led to harm to individuals, making this an AI Incident rather than a hazard or complementary information. The involvement of AI is clear and pivotal to the harm caused.
Secretly filmed female complainants and used AI to synthesize nude photos; repulsive Kaohsiung officer given a major demerit and transferred

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to synthesize nude images from illegally obtained photos, which directly harms the victims by violating their privacy and personal rights. The AI system's use here is malicious and leads to a clear violation of human rights and applicable laws. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Secretly filmed 6 female complainants for "AI-synthesized nude images"⋯ young Kaohsiung officer's identity exposed! One major demerit and transfer

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to synthesize nude images of women who were secretly filmed by a police officer, indicating the involvement of an AI system in generating harmful content. The harm is direct and significant, involving violations of privacy and human rights. The misuse of AI in this context has caused real harm to multiple victims, fulfilling the criteria for an AI Incident. The disciplinary and legal responses further confirm the recognition of harm caused by the AI system's misuse.
Outrageous! Reporting a crime at the police station became a nightmare? Woman secretly filmed by an officer and turned into AI-synthesized sexual images

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system was used to synthesize sexual images without consent, which constitutes a violation of personal rights and privacy, falling under harm category (c) - violations of human rights or breach of applicable law protecting fundamental rights. The AI system's use directly led to harm to the victim. Therefore, this qualifies as an AI Incident.
Female netizen alleges "secretly filmed by police while reporting a crime, then AI-synthesized into nude images": I'm not the only one! Post appears to draw out other victims in the same case

2025-12-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate synthetic nude images (AI-generated sexual content) from images illicitly captured by a police officer. This constitutes a violation of privacy and potentially other rights, which falls under violations of human rights or breach of obligations under applicable law. The event describes actual harm to victims, not just potential harm, and the AI system's role is pivotal in creating the harmful content. Therefore, this qualifies as an AI Incident.
Secretly filmed female complainants and used AI to synthesize nude photos; repulsive Kaohsiung officer given a major demerit and transferred

2025-12-13
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to synthesize nude images from illicitly taken photos, which directly harms the victim by violating her privacy and personal rights. This constitutes a breach of applicable laws protecting fundamental and personal rights, fulfilling the criteria for an AI Incident. The AI system's use here is central to the harm caused, not merely incidental or potential, and the harm has already occurred. Therefore, this is classified as an AI Incident.
At least 5 victims! "AI face-swapped" into nude images; Kaohsiung officer given a demerit and transferred

2025-12-13
UDN
Why's our monitor labelling this an incident or hazard?
The use of AI face-swapping technology to create non-consensual explicit images directly caused harm to the victims, violating their rights and privacy. The AI system's misuse is central to the harm, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal protections. The event describes realized harm, not just potential risk, and involves the use of AI in a harmful manner.
Perverted male officer used his body camera to secretly film women; multiple victims face-swapped into AI-synthesized nude images

2025-12-13
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI for face swapping to create synthetic nude images without consent, which is a violation of personal rights and privacy. The AI system's use directly caused harm to multiple victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of AI in generating the harmful content and the ongoing legal investigation confirm the direct link to harm.
Kaohsiung officer allegedly secretly filmed female complainants and synthesized indecent images; one major demerit and transfer

2025-12-13
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI (artificial intelligence) for face-swapping to create non-consensual explicit images, which is a clear example of AI system use causing harm. The harm includes violations of personal rights and privacy, which fall under violations of human rights or breach of applicable law protecting fundamental rights. The involvement of AI in the creation of manipulated images that harmed at least five victims confirms the direct link between AI use and realized harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Woman secretly filmed while reporting a crime and "synthesized into nude images"; perverted Kaohsiung male officer given one major demerit and transferred

2025-12-13
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate synthetic nude images of women without their consent, which is a clear violation of human rights and personal data protection laws. The AI system's misuse directly caused harm to the victims, fulfilling the criteria for an AI Incident under violations of human rights and breach of applicable law. The administrative punishment and ongoing investigation further confirm the seriousness of the harm caused.
Police station becomes a crime scene! Woman reporting a crime secretly filmed by male officer for "AI face-swapped nude photos": more than one victim

2025-12-13
中時新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to synthesize nude images from photos secretly taken by a police officer without consent. This constitutes a violation of human rights and privacy, fulfilling the criteria for harm under the AI Incident definition (violation of rights and harm to individuals). The AI system's development and use in this context directly led to the harm. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Kaohsiung officer allegedly secretly filmed female complainants and synthesized indecent images; one major demerit and transfer

2025-12-13
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI for face-swapping to create non-consensual explicit images, which is a misuse of AI technology leading to harm to individuals' rights and privacy. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and breaches of applicable laws protecting personal data and privacy. The harm is realized and involves multiple victims, making it a clear AI Incident rather than a hazard or complementary information.
Kaohsiung officer secretly filmed female complainants for "AI-synthesized nude photos"! She only learned she was a victim when a subpoena arrived; Sanmin First Precinct takes action

2025-12-13
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to synthesize nude images from illicitly obtained photos, which directly harms the victims by violating their privacy and personal rights. The AI system's use here is malicious and has led to realized harm, including emotional distress and legal violations. The involvement of AI in generating harmful synthetic content that breaches fundamental rights fits the definition of an AI Incident. The police response and ongoing investigation further confirm the seriousness of the harm caused.
Female netizen alleges "secretly filmed by police while reporting a crime, then AI-synthesized into nude images": I'm not the only one! Post appears to draw out other victims in the same case

2025-12-13
三立新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate synthetic nude images from real photos taken without consent, which is a direct misuse of AI technology causing harm to individuals' rights and privacy. The harm is realized and ongoing, as victims have been identified and are seeking legal recourse. The AI system's role is pivotal in creating the harmful synthetic content, fulfilling the criteria for an AI Incident under violations of human rights and privacy.
"Secret filming of complainants" inside the police station! AI used to make indecent images of 6 women; the officer's photo revealed

2025-12-13
TVBS
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create non-consensual explicit images (deepfake-like content) of female victims, which is a direct violation of their rights and privacy. The AI system's use here is malicious and has caused realized harm to multiple individuals. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating harmful content violating human rights.
Secretly filmed female complainants for "AI-synthesized indecent photos"! She only learned she was a victim when a subpoena arrived; officer given a major demerit and transferred

2025-12-13
台視新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create synthetic explicit images (AI face-swapping) without consent, which directly harms the victims by violating their rights and privacy. This is a clear case where the AI system's use has directly led to harm (violation of rights and personal dignity). Therefore, this qualifies as an AI Incident under the framework, as the AI system's misuse caused direct harm to individuals.
Kaohsiung male officer caught "secretly filming 6 female complainants"! He even used "AI to synthesize nude images"; his grim fate revealed

2025-12-13
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate synthetic nude images from photos secretly taken by a police officer, which is a misuse of AI technology leading to violations of privacy and personal rights. The harm is realized and affects multiple victims, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in transforming the illicit photos into harmful synthetic content, directly contributing to the harm. This is not merely a potential risk or a complementary update but a concrete case of AI misuse causing harm.
Perverted scandal rocks the police force! Young officer secretly filmed female complainants for "AI face-swapped nudes"; victims angrily accuse leadership of going easy on him

2025-12-13
鏡新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to generate synthetic nude images from secretly filmed photos, which constitutes a violation of human rights and privacy. The AI system's use directly led to harm to multiple women, fulfilling the criteria for an AI Incident. The involvement of AI in creating harmful content and the resulting legal and social consequences confirm this classification.
Repulsive officer secretly filmed female complainants and made "AI-synthesized nude photos"! Multiple victims; disciplinary action revealed

2025-12-13
TVBS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate synthetic nude images without consent, which is a clear violation of human rights and privacy. The harm has already occurred to multiple victims, and the AI system's role is pivotal in creating the harmful content. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of legal protections.
Secretly filmed while giving a statement! Young Kaohsiung officer exposed for AI-synthesizing nude images; one major demerit and transfer

2025-12-13
鏡新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate synthetic nude images (AI face-swapping or deepfake technology) from photos taken during police record-taking. This constitutes a direct violation of human rights and privacy, fulfilling the criteria for an AI Incident under the framework. The AI system's misuse has directly led to harm to individuals' rights and privacy, and the police officer's actions have been officially sanctioned. Therefore, this is classified as an AI Incident.
Outrageous! Reporting a crime at the police station became a nightmare? Woman secretly filmed by an officer and turned into AI-synthesized sexual images

2025-12-13
鏡新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create synthetic sexual images of a woman without her consent, which is a direct violation of her rights and privacy. The AI system's misuse has led to harm to the victim, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The event is not merely a potential risk but an actual occurrence causing harm, and the AI's role is pivotal in the harm caused. Hence, it is classified as an AI Incident.
Disgusting! Women secretly filmed while reporting crimes and even while using the restroom; male officer prosecuted over AI face-swapped pornographic videos

2025-12-13
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI face-swapping technology to create non-consensual sexual deepfake videos, which is a direct violation of the victims' rights and privacy. The AI system's use here is central to the harm caused, as it enabled the creation of fake explicit content without consent. The harm is realized and ongoing, with victims receiving subpoenas and planning civil lawsuits. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities.
Beyond outrageous! Kaohsiung police hit by drunk-driving and secret-filming scandals in a single day; police discipline in collapse

2025-12-14
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to synthesize non-consensual explicit images from illicitly obtained photos, which is a direct violation of personal data protection laws and harms individuals' rights. The AI system's misuse is central to the incident, causing clear harm to the victims. The drunk driving case, while serious, does not involve AI. Hence, the AI-related part of the event is an AI Incident due to realized harm from AI misuse.
Two Kaohsiung City precincts hit by back-to-back discipline cases; drunk-driving and secret-filming allegations batter the police's reputation

2025-12-14
OwlNews
Why's our monitor labelling this an incident or hazard?
The second case explicitly involves the use of AI technology to synthesize inappropriate images, which directly harms individuals' rights and privacy, fulfilling the criteria for an AI Incident under violations of human rights and applicable law. The first case, while serious, does not involve AI and thus is not relevant to AI harm classification. Therefore, the overall event includes an AI Incident due to the AI-enabled image synthesis misuse.