Teenagers Arrested for Deepfake Sexual Abuse on Telegram


The information displayed in the AI Incidents Monitor (AIM) should not be construed as representing the official views of the OECD or of its member countries.

South Korean authorities arrested a high school student and charged 23 others for using AI deepfake technology to synthesize explicit images and videos by merging the faces of celebrities and ordinary women onto nude bodies. The content, distributed via Telegram, violated privacy and sexual exploitation laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI (deepfake technology) to create manipulated pornographic content, a direct violation of legal protections that harms individuals' rights and dignity. Because the harm has already occurred and is directly linked to the AI system's use, the event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability · Privacy & data governance · Respect of human rights · Robustness & digital security · Safety · Transparency & explainability

Industries
Media, social platforms, and marketing · Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights · Psychological · Reputational

Severity
AI incident

AI system task
Content generation · Recognition/object detection


Articles about this incident or hazard


Teenager arrested for producing 500 deepfake pornographic images of celebrities

2025-05-22
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated pornographic content, a direct violation of legal protections that harms individuals' rights and dignity. Because the harm has already occurred and is directly linked to the AI system's use, the event meets the criteria for an AI Incident.

Teen arrested for producing and distributing 500 synthetic nude images of celebrities on Telegram | Yonhap News

2025-05-21
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI platforms to produce deepfake images and videos: realistic fabricated sexual content involving individuals without their consent. This constitutes a violation of human rights and personal dignity, fitting the definition of harm under AI Incident category (c), violations of human rights or breach of applicable law protecting fundamental rights. The involvement of AI in the creation and distribution of these materials directly led to the harm described. Therefore, this event qualifies as an AI Incident.

Teen arrested for producing and distributing 500 synthetic nude images of female celebrities - Maeil Business Newspaper

2025-05-21
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based deepfake technology to create and distribute fake nude images and videos, a direct violation of human rights and of legal statutes protecting individuals from sexual exploitation and defamation. The involvement of AI in generating these synthetic materials is clear, and the harm was realized through their distribution, injuring the dignity and rights of the victims. Hence, it meets the criteria for an AI Incident: the AI system's use directly led to significant harm.

Teen who created nude deepfakes of famous celebrities arrested - Star Today

2025-05-22
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated nude images and videos without consent, which were distributed and caused harm. The harms include violations of human rights and legal protections against sexual exploitation, fulfilling the criteria for an AI Incident. The AI system's use directly led to these harms through the creation and dissemination of non-consensual deepfake content.

'Audacious teen' arrested for producing and distributing 500 synthetic nude images of celebrities on Telegram - Maeil Business Newspaper

2025-05-22
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-based deepfake technology to produce and distribute non-consensual synthetic sexual content, which is a direct violation of human rights and personal dignity. The AI system's outputs were used maliciously to harm individuals, constituting realized harm. The involvement of AI in the creation of these materials and their distribution leading to legal action and arrests confirms this as an AI Incident rather than a hazard or complementary information. The harm is direct, significant, and clearly articulated, fulfilling the definition of an AI Incident.

Teens apprehended for producing and distributing deepfake videos of idols - Maeil Business Newspaper

2025-05-22
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to create deepfake sexual exploitation materials, which were then distributed, causing direct harm to the victims through violation of their rights and privacy. The AI system's use in producing and spreading such content is central to the harm caused. Therefore, this event meets the criteria for an AI Incident due to direct harm to persons and violation of legal protections.

Teen arrested for producing and distributing 500 nude deepfakes of famous celebrities

2025-05-22
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI platforms to synthesize and manipulate images and videos, which is characteristic of deepfake AI systems. The production and distribution of 500 deepfake nude images and videos of celebrities and others is a direct use of AI technology that violated rights and breached the law, qualifying as harm under the framework. The involvement of AI in creating harmful content whose distribution caused real harm to individuals meets the criteria for an AI Incident.

"Nude bodies composited onto celebrities' faces"... teen who distributed 500 deepfakes on Telegram

2025-05-22
Financial News (파이낸셜뉴스)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake content, which is an AI system application. The creation and distribution of such content constitute a violation of human rights and legal protections, fulfilling the criteria for harm under AI Incident definition (c). The direct link between AI use and the harm caused by the false sexual images and videos justifies classification as an AI Incident.

Teen arrested for distributing illegal content on Telegram featuring realistically sophisticated composites of celebrities' faces

2025-05-21
Wow TV
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of deepfake content using AI platforms, which directly violates laws protecting minors and sexual violence statutes. The AI system's use in synthesizing realistic fake images and videos of individuals constitutes a breach of rights and causes significant harm to the victims and society. The police investigation and arrests confirm the harm has occurred. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in illegal content creation and distribution.

Deepfake sex crimes using composited women's faces... teenage male arrested, 23 accomplices booked

2025-05-21
MBN (Maeil Broadcasting Network)
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based deepfake technology to create and distribute sexually explicit synthetic media without consent, which is a direct violation of human rights and sexual protection laws. The AI system's use in producing these harmful materials and their dissemination has directly led to significant harm to individuals, including minors, thus meeting the definition of an AI Incident under violations of human rights and sexual crime laws.

Ring arrested for producing and distributing synthetic nude images of celebrities on Telegram

2025-05-22
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of AI-generated deepfake sexual images and videos, which are a direct violation of human rights and personal dignity. The use of AI to synthesize faces onto nude bodies and distribute these materials constitutes an AI Incident under the framework, as it directly leads to harm (violation of rights and harm to communities). The involvement of AI in producing these realistic fake images is clear, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

Ring arrested for producing and distributing synthetic nude images of celebrities on Telegram

2025-05-22
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-based deepfake technology to create false nude images and videos of women, including celebrities and ordinary individuals, which were then distributed online. This directly violates human rights and privacy, fitting the definition of an AI Incident under category (c), violations of human rights or breach of legal protections. The involvement of AI in creating these harmful materials, and their distribution causing harm to individuals and communities, justifies classification as an AI Incident.