Incheon Deepfake Porn Scandal Involving University Alumni


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Incheon, police arrested 15 suspects, including a 24-year-old graduate student, for using AI deepfake technology to superimpose the faces of female university alumni onto explicit nude images. The manipulated content was distributed via Telegram groups, constituting non-consensual sexual imagery and a gross violation of the victims' privacy and human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-based deepfake technology to create fake explicit images of women without their consent and distribute them in online chat rooms. This use of AI directly led to violations of human rights and sexual violence laws, fulfilling the criteria for an AI Incident under the framework. The harm is realized and ongoing, involving multiple victims and legal action against perpetrators.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Safety
Human wellbeing
Accountability
Transparency & explainability
Robustness & digital security

Industries
Education and training
Digital security
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI incident

AI system task
Content generation
Recognition/object detection


Articles about this incident or hazard


Another "acquaintance humiliation room" using deepfakes uncovered... graduate student and others arrested

2025-04-02
국민일보
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology to create fake explicit images of women without their consent and distribute them in online chat rooms. This use of AI directly led to violations of human rights and sexual violence laws, fulfilling the criteria for an AI Incident under the framework. The harm is realized and ongoing, involving multiple victims and legal action against perpetrators.

Group caught producing and distributing "composite nude photos" made with deepfake technology

2025-04-02
pressian.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake technology, an AI system that synthesizes faces onto nude images, was used to produce and distribute harmful sexual content without consent. This caused direct harm to the victims' dignity, privacy, and mental health, fulfilling the criteria for an AI Incident under violations of human rights and breach of applicable law. The involvement of AI in the creation and dissemination of these images is central to the harm caused, making this an AI Incident rather than a hazard or complementary information.

Nude photos composited onto the faces of university alumni and distributed... "acquaintance humiliation room" uncovered

2025-04-02
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI deepfake technology was used to create fake nude images of university alumni and acquaintances, which were then distributed via social media and messaging platforms. This constitutes a direct violation of human rights and privacy, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as victims have been identified and the perpetrators arrested. Therefore, this event is classified as an AI Incident due to the direct involvement of AI in causing significant harm through malicious use.

Nude photos composited onto the faces of university alumni and distributed... "acquaintance humiliation room" uncovered (roundup)

2025-04-02
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI deepfake technology was used to create fake nude images of individuals without their consent, which were then distributed widely, causing harm to the victims' rights and dignity. This meets the definition of an AI Incident because the AI system's use directly led to violations of human rights and significant harm to individuals. The involvement of AI in the creation of harmful content and its distribution through social networks and messaging apps confirms the presence of an AI system causing direct harm.

"They did this with personal data and deepfakes"... 15 arrested for making OO registration cards - 매일경제

2025-04-02
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-based deepfake technology was used to synthesize nude images by combining faces of real women with other images without consent. This use of AI directly caused harm to the victims through privacy violations and distribution of sexualized false content, which constitutes a violation of human rights and personal dignity. The event involves the malicious use of AI systems, resulting in realized harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Nude photos of university alumni composited and distributed in an "acquaintance humiliation room"... 15 arrested

2025-04-02
아시아경제
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used AI-based deepfake technology to synthesize nude images by combining faces of victims with other bodies, which were then distributed via Telegram groups. This use of AI directly led to violations of human rights, specifically sexual rights and privacy, and caused harm to the victims. The involvement of AI in the creation and dissemination of harmful content meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.

Inha University student produced and distributed deepfake sexual abuse material targeting 17 female alumni

2025-04-02
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, an AI system for image synthesis, to create fake sexual content without consent. This use directly led to harm by violating the victims' rights and distributing illegal content. The involvement of AI in the creation and dissemination of harmful material fits the definition of an AI Incident, as it caused violations of human rights and harm to communities. Therefore, this event is classified as an AI Incident.

Group caught compositing nude photos onto the faces of university alumni and acquaintances and distributing them

2025-04-02
한국일보
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake technology, an AI system for image synthesis, was used to create fake nude images of women without their consent. The distribution of these images on Telegram groups caused direct harm to the victims, violating their rights and constituting a criminal sexual offense. This meets the criteria for an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals. The involvement of AI in the malicious creation and dissemination of harmful content is central to the incident.

"We'll never get caught," they boasted... group running a Telegram "acquaintance humiliation room" arrested

2025-04-02
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-based deepfake technology was used to create fake nude images of women without their consent, which were then distributed widely, causing harm to the victims. This use of AI directly led to violations of human rights and privacy, fitting the definition of an AI Incident. The involvement of AI in the creation of harmful content and the resulting legal actions confirm the realized harm, not just a potential risk.

Group that composited ○○ onto the faces of alumni and acquaintances and distributed it in an "acquaintance humiliation room"

2025-04-02
문화일보
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI (deepfake) technology was used to synthesize nude images by combining faces of acquaintances with other bodies, which were then distributed in online chat rooms. This constitutes a violation of human rights and privacy, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm. The involvement of AI in creating harmful content and its malicious distribution meets the definition of an AI Incident under violations of human rights and harm to communities.

"Humiliation room" targeting female university alumni and acquaintances uncovered... 15 arrested en masse

2025-04-02
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-based deepfake technology to create fake nude images by synthesizing victims' faces onto other bodies, which were then distributed widely via social media and messaging platforms. This use of AI directly led to violations of human rights, specifically privacy and dignity, and caused harm to the victims and their communities. The involvement of AI in the creation and dissemination of harmful content meets the criteria for an AI Incident as the harm is realized and directly linked to the AI system's use.

Nude photos of university alumni and others composited with AI deepfakes and distributed... group of 15 caught

2025-04-02
뉴스핌
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to synthesize nude images without consent, which were then distributed widely, causing harm to the victims. This constitutes a violation of human rights and privacy, fulfilling the criteria for an AI Incident. The AI system's use in generating harmful content and its distribution directly led to realized harm, not just potential harm. Therefore, this event is classified as an AI Incident.

Nude photos composited onto the faces of university alumni and distributed... "acquaintance humiliation room" uncovered

2025-04-02
스포츠조선
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI (deepfake) technology was used to synthesize faces onto nude images, which were then distributed widely, constituting a violation of human rights and causing harm to individuals. This meets the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to persons). The involvement of AI in the creation of harmful content and its distribution through social networks and messaging apps clearly fits the criteria for an AI Incident rather than a hazard or complementary information.

Sexual abuse material made from "faces of university alumni and acquaintances"... "acquaintance humiliation room" group arrested

2025-04-02
inews24
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake AI technology was used to synthesize faces onto nude images, creating fake sexual content without consent. This directly led to harm to individuals' rights and reputations, constituting a violation of human rights and personal dignity. The involvement of AI in the creation of harmful content and its distribution meets the criteria for an AI Incident, as the AI system's use directly caused significant harm.

"Nude photos composited onto the faces of 17 female alumni and acquaintances"... Telegram "acquaintance humiliation room" group arrested en masse

2025-04-03
인사이트
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-based deepfake technology was used to create synthetic sexual images without consent, which were then distributed widely, causing direct harm to the victims. This constitutes a violation of human rights and personal dignity, fitting the definition of an AI Incident. The AI system's use directly led to the harm described, including breaches of privacy and sexual exploitation, thus qualifying this event as an AI Incident rather than a hazard or complementary information.