South Korean University Students Use AI Deepfakes for Sexual Harassment and Exploitation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A survey by the Korean Women's Policy Institute found that about 20% of male university students who had created AI-generated deepfake images or videos did so for sexual gratification or to harass others. The study highlights significant gender differences in awareness of, and in the emotional impact of, deepfake-related sexual crimes on South Korean campuses.[AI generated]

Why is our monitor labelling this an incident or hazard?

Deepfake technology is an AI system that generates synthetic media. The article documents actual use of this AI system to create harmful content for sexual exploitation and harassment, which constitutes violations of human rights and harm to individuals and communities. The harms are realized, not hypothetical, and the AI system's role is pivotal in enabling these harms. Therefore, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Respect of human rights, Safety, Fairness, Privacy & data governance, Accountability, Transparency & explainability

Industries
Education and training

Affected stakeholders
Women

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

One in Five Male University Students Who Made Deepfakes: "For Sexual Gratification or to Harass Others" | Yonhap News

2026-01-17
Yonhap News Agency
Why is our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through the use of deepfake technology (an AI system generating synthetic images/videos). However, it does not describe a specific AI incident where harm has occurred or a hazard event where harm is imminent. Instead, it presents research findings on attitudes and behaviors related to AI-generated deepfake content, which informs understanding of potential social harms and risks. This fits the definition of Complementary Information, as it enhances understanding of AI-related societal issues without reporting a new incident or hazard.
"To Satisfy Sexual Desire," "Because I Wanted to Harass Them"... Chilling Answers When Asked Why They Made Deepfakes - Maeil Business Newspaper

2026-01-18
mk.co.kr
Why is our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The article documents actual use of this AI system to create harmful content for sexual exploitation and harassment, which constitutes violations of human rights and harm to individuals and communities. The harms are realized, not hypothetical, and the AI system's role is pivotal in enabling these harms. Therefore, this event meets the criteria for an AI Incident.
One in Five Male Students Who Made Deepfakes: "For Sexual Purposes or to Harass"

2026-01-18
Asia Economic Daily
Why is our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic images or videos. The article details how male students have used this AI system to create content for sexual gratification and harassment, which are direct harms to individuals (sexual harm) and communities (harassment and victim blaming). The harms are realized, not hypothetical, as the study reports actual creation and use of deepfakes for these purposes. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities.
One in Five Male University Students: "Made Deepfakes for Sexual Gratification or to Harass Others"

2026-01-18
Kookmin Ilbo
Why is our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The article documents actual use of this AI system by male students to create harmful content for sexual gratification and harassment, which are direct violations of rights and cause harm to individuals and communities. The survey data confirms that these harms are occurring, not just potential. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing human rights violations and harm to communities.
Deepfake Sex Crimes Spreading on University Campuses... Half Cite "Sexual Desire" - The Fact

2026-01-18
The Fact
Why is our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The article describes actual production and distribution of deepfake sexual content, which directly harms individuals' rights and causes emotional and social harm. This meets the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article does not merely warn of potential harm but documents ongoing harm and victimization, thus qualifying as an AI Incident rather than a hazard or complementary information.
"To Satisfy Sexual Desire or to Harass Others": The Main Motives of Male University Students Who Made Deepfakes

2026-01-18
Munhwa Ilbo
Why is our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media, so the AI system involvement is explicit. The article reports that some male students have used AI to create deepfake content for sexual gratification or harassment, which constitutes violations of rights and harm to individuals. The harms are realized, as evidenced by the psychological impact and social consequences described. Thus, this is an AI Incident because the AI system's use has directly led to harm as defined in the framework.
One in Five Male University Students Who Made Deepfakes: "For Sexual Purposes or to Harass"

2026-01-18
Kookje Shinmun
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation) to create synthetic images and videos. The survey reveals that a significant portion of male students used these AI-generated deepfakes for sexual gratification and harassment, which are clear violations of human rights and cause harm to individuals and communities. The harms are realized and documented, not merely potential. Hence, this is an AI Incident due to the direct link between AI system use and harm.
Roughly One in Five Male University Students Who Made Deepfakes: "For Sexual Gratification or to Harass Others"

2026-01-18
Kuki News
Why is our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic images and videos. The article describes actual use of deepfakes by male students to harass or exploit others sexually, which constitutes harm to individuals and violations of rights. The harm is realized, not just potential, and the AI system's use is central to the incident. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by AI-generated content.
One in Five Male University Students Who Made Deepfakes: "For Sexual Gratification, to Harass" | JoongAng Ilbo

2026-01-18
JoongAng Ilbo
Why is our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated images and videos. The reported harms include sexual exploitation, harassment, and psychological harm to victims, which are violations of human rights and harm to communities. Since these harms have already occurred as per the study's findings, this qualifies as an AI Incident. The article does not merely warn of potential harm but documents realized harm through the use of AI-generated deepfake content.
One in Five Male University Students Cite "Sexual Gratification or Harassing Others" as Reasons for Making Deepfakes

2026-01-19
Today Shinmun
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated images and videos that have been used for sexual harassment and non-consensual exploitation, which are clear violations of human rights and cause harm to individuals and communities. The harms are realized, not hypothetical, as evidenced by reported victimization and psychological impacts. The AI system's use in producing these harmful materials directly leads to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.