AI-Generated Deepfake Images Used to Extort Taiwanese Professors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple professors at Taiwanese universities received extortion emails containing AI-generated explicit deepfake images bearing their faces. The images, created by swapping the professors' faces onto stock photos, were used to threaten reputational harm unless payment was made. The incidents caused psychological distress and prompted police investigations and institutional warnings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of deepfake AI technology to create fake explicit images used in threatening emails, which harmed the recipients through intimidation and the potential violation of their privacy and personal rights. The AI system's use in this malicious manner directly leads to harm, fulfilling the criteria for an AI Incident under the framework.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Robustness & digital security; Accountability; Transparency & explainability; Human wellbeing

Industries
Education and training; Digital security

Affected stakeholders
Workers

Harm types
Psychological; Reputational; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

Receiving a Deep Fake Threat Letter, 李忠憲 Quips: Have Scam Rings Decided Professors Are Easy Marks Lately? - Life - Liberty Times Net

2023-03-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create fake explicit images used in threatening emails, which harmed the recipients through intimidation and the potential violation of their privacy and personal rights. The AI system's use in this malicious manner directly leads to harm, fulfilling the criteria for an AI Incident under the framework.
Professors' Photos Doctored for Extortion; Taipei Medical University Adds Watermarks to Website Photos in Batches - Taipei - Liberty Times Net

2023-03-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article describes an incident where photos of professors were illegally used and AI or algorithmic methods were employed to synthesize inappropriate images, which were then used to extort the individuals. This involves the use of AI systems for malicious purposes, directly causing harm to individuals (violation of rights and psychological harm). The harm is realized, not just potential. The university's response is a complementary action but does not negate the incident classification. Hence, this is an AI Incident.
Over a Thousand Professors Threatened with Deepfake Indecent Photos; Perpetrators Suspected to Be from China - Society - Liberty Times Net

2023-03-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic fake images by face-swapping, to create non-consensual explicit photos of professors. These images are then used in extortion attempts, causing direct harm to individuals' reputations and psychological well-being, which falls under violations of human rights and harm to communities. The involvement of AI is clear and central to the incident, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident.
What to Do if You Receive a Face-Swap Extortion Letter? Police: Screenshot It and File a Report Right Away | United Daily News

2023-03-24
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to generate manipulated images for extortion, which directly harms individuals through threats and financial loss. The AI system's use in this malicious context has led to realized harm (extortion, psychological harm), fitting the definition of an AI Incident. The police response and advice are complementary information but the core event is an AI Incident due to the realized harm caused by AI misuse.
Face-Swapped Lewd Photos Used to Scam Professors Across Taiwan; Taipei Medical University: Public Photos Will Be Watermarked | United Daily News

2023-03-24
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, an AI system, to create manipulated images for extortion, causing direct harm to the individuals targeted (professors). This fits the definition of an AI Incident because the AI system's use has directly led to harm (psychological and reputational harm) and violation of privacy rights. The university's mitigation measures are complementary information but do not change the classification of the primary event as an AI Incident.
Face-Swap Scams Infiltrate Universities, Targeting Professors with Threats | United Daily News

2023-03-24
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of deepfake AI technology to create fabricated images of professors, which are then used in threatening emails to extort and intimidate them. The harm is realized as professors experience fear, distress, and potential reputational damage, fulfilling the criteria of harm to persons and violation of rights. The AI system's malicious use directly leads to these harms, making this an AI Incident rather than a hazard or complementary information.
Multiple Professors at NCKU, NTHU, and Taipei Medical University Extorted with Doctored Indecent Photos; Scam Emails Believed to Originate from China

2023-03-25
HiNet
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the illegal synthesis of explicit images by replacing faces of professors, which is a known application of AI-based deepfake technology. The synthesized images are used in extortion emails, causing harm to the individuals targeted. The harm includes violation of personal rights, reputational damage, and psychological harm, which fits under violations of human rights and harm to communities. The AI system's use in generating these images is central to the incident. Therefore, this qualifies as an AI Incident.
Professors' Photos at Multiple Universities Doctored; Extortion Letters Reuse One Image for Multiple Targets and Contain Mainland Chinese Phrasing | Life | SETN.COM

2023-03-24
SETN
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the illegal synthesis of professors' photos into fake explicit images used for extortion. The synthesis of images strongly implies the use of AI-based generative techniques (deepfake or similar). The harm is realized as professors suffer reputational damage, psychological harm, and violation of their rights. The extortion emails and the use of AI-generated images directly lead to these harms. Therefore, this is an AI Incident due to the direct involvement of AI-generated content causing harm to individuals through extortion and reputational damage.