Surge in AI-Driven Deepfake Digital Sex Crimes in Korea


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korea faces a rising wave of digital sex crimes involving AI-generated deepfakes, predominantly targeting women and young people. More than 10,000 victims have received support including counseling, content deletion, and legal aid as cases rise sharply, highlighting serious human rights violations linked to the misuse of AI technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate deepfake sexual content, which has directly led to harm including violations of human rights, psychological harm, and social harm to victims. The crimes are ongoing and have resulted in arrests and legal actions, confirming realized harm. The AI system's role in creating synthetic media is pivotal to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI-generated content has directly caused significant harm to individuals and communities.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Accountability; Fairness; Robustness & digital security; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Women; Children

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


Deepfake, AI-related digital sex crimes targeting women, children surge in Korea

2025-04-10
koreajoongangdaily.joins.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake sexual content, which has directly led to harm including violations of human rights, psychological harm, and social harm to victims. The crimes are ongoing and have resulted in arrests and legal actions, confirming realized harm. The AI system's role in creating synthetic media is pivotal to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI-generated content has directly caused significant harm to individuals and communities.

Digital sex crime victims surpass 10,000 in South Korea, majority in teens, 20s

2025-04-12
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the abuse of artificial intelligence to generate deepfakes as a major driver of the increase in digital sex crimes, with thousands of victims seeking assistance. The harms include violations of rights and psychological harm to victims, especially vulnerable teens and young adults. The AI system's use in creating harmful synthetic media directly leads to these harms, fulfilling the criteria for an AI Incident. The government's response is noted but does not change the classification.

Digital sex crime victims surpass 10,000 in Korea, majority in teens, 20s

2025-04-11
The Korea Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the abuse of artificial intelligence to generate deepfakes as a major driver of the increase in digital sex crimes, which have caused direct harm to thousands of victims, mostly teenagers and people in their 20s. The harms include violations of privacy and human rights, and the AI system's misuse is a pivotal factor in these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to individuals and communities.

In South Korea, digital sex crimes soar amid rise in AI, deepfake technology

2025-04-11
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake technology to create manipulated images used in digital sex crimes, which have caused direct harm to victims, including minors. The involvement of AI in producing harmful content that leads to violations of human rights and personal harm fits the definition of an AI Incident. The harm is realized and ongoing, not merely potential, thus classifying this as an AI Incident rather than a hazard or complementary information.

AI, deepfake technology fueling digital sex crimes in South Korea

2025-04-12
Wion
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake technology being used to manipulate images for digital sex crimes, which directly harms victims by violating their rights and causing psychological and social harm. The increase in reported cases and the government's response confirm that harm has materialized. The AI system's use in generating manipulated sexual images is central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Deepfake Porn Crisis Fuels Alarming Rise In Digital Sex Crimes In THIS Country: How AI Is Making It Worse

2025-04-12
English Jagran
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered deepfake tools to create manipulated sexual content without consent, which has caused real and significant harm to individuals, including privacy violations and emotional distress. The involvement of AI in the creation and dissemination of this harmful content directly links the AI system's use to the harm experienced by victims. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals.

Digital sex crime victims top 10,000 for the first time... teenage girls most affected

2025-04-10
Asia Economy
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-based image synthesis and deepfake technologies that have directly led to digital sexual crimes harming individuals, particularly young women. The harms include violations of human rights and harm to communities. Since the harm is realized and ongoing, this qualifies as an AI Incident rather than a hazard or complementary information. The article details the scale of harm, victim demographics, and the role of AI in enabling these crimes, fulfilling the criteria for an AI Incident.

Digital sex crime victims surpass 10,000... deepfake cases more than double | Yonhap News

2025-04-10
Yonhap News
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies deepfake technology, an AI system capable of generating synthetic manipulated videos, as a key factor in the increase of digital sexual crime harms. The harms described include victimization through unauthorized synthetic content creation and distribution, which directly violates privacy and causes psychological and social harm. The AI system's use in generating deepfake content is directly linked to these harms, fulfilling the criteria for an AI Incident. The report also provides data on the scale and nature of these harms, confirming that the AI system's role is pivotal and the harm is realized, not merely potential.

"Could casually uploading my photo to ChatGPT feed digital sex crimes?"... Most content distributed via servers based in the US - Maeil Business Newspaper

2025-04-10
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT to generate synthetic images that emphasize or expose certain body parts, which are then used in digital sexual crimes. This use of AI has directly contributed to harm against victims, including violations of privacy, psychological harm, and exploitation. The harms are materialized and significant, involving violations of human rights and harm to individuals. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Digital sex crime victims receiving support exceed 10,000

2025-04-10
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is a form of generative AI, in digital sexual crimes leading to significant harm to victims, predominantly young women. The harm includes violations of rights and psychological damage from synthetic and edited content created using AI. Since the AI system's use has directly led to these harms, this fits the definition of an AI Incident under violations of human rights and harm to communities.

Digital sex crime victim support tops 10,000... cases involving ChatGPT also reported | Hankook Ilbo

2025-04-10
Hankook Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (e.g., ChatGPT) to create deepfake images that have been used in digital sexual crimes, directly causing harm to individuals. The AI system's use in generating manipulated content that facilitates sexual crimes constitutes a violation of rights and harm to persons. The article reports actual cases of such harm, not just potential risks, thus meeting the criteria for an AI Incident rather than a hazard or complementary information.

Over 10,000 victims of 'deepfake' and other digital sex crimes... 72 percent of victims are women

2025-04-10
Etoday
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating synthetic digital content. The harm described includes psychological and social harm to victims, predominantly women and young people, from digital sex crimes involving AI-generated content. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and individuals. The report also highlights the increasing prevalence and severity of these harms, confirming that the AI system's involvement is not hypothetical but actual and ongoing.

Digital sex crime victims surpass 10,000 - Jeonpa Shinmun

2025-04-10
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the rise in digital sexual crimes involving deepfake and synthetic/edited content created using digital technologies, which are AI systems capable of generating manipulated media. These AI systems have been used to produce harmful content that directly impacts victims, causing violations of privacy and personal rights. The harm is realized and ongoing, as evidenced by the increasing number of victims and support cases. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI-generated synthetic content causing harm to individuals.

Digital sex crime victims surpass 10,000... deepfake cases more than double

2025-04-10
Sports Chosun
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) in the creation of synthetic sexual content that harms victims, fulfilling the definition of an AI Incident. The harm is realized and ongoing, as evidenced by the increase in victims and support cases. The AI system's use directly leads to violations of personal rights and harm to individuals, meeting the criteria for an AI Incident rather than a hazard or complementary information.

"Deepfake sex crimes up 227 percent... victims concentrated among women in their teens and 20s"

2025-04-10
Newspim
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses deepfake technology, an AI system capable of generating synthetic but realistic images and videos, being used to create non-consensual sexual content. The resulting harm includes violations of privacy, psychological trauma, and disproportionate impact on young women, which are direct harms caused by the AI system's outputs. The increase in such crimes and the support provided to victims confirm that harm has materialized. Hence, this is an AI Incident as the AI system's use has directly led to violations of rights and harm to individuals and communities.

Digital sex crime victims surpass 10,000... 'deepfake' cases triple

2025-04-10
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that synthesizes or edits video content to create realistic but fake videos. The article documents a substantial increase in deepfake-related digital sexual crimes, indicating that AI-generated content has directly caused harm to individuals, violating their rights and privacy. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to people (violation of rights and harm to communities). The article does not merely warn of potential harm but reports actual incidents and victim support statistics, confirming realized harm.

10,000 digital sex crime victims last year... 'deepfake' cases up 227 percent

2025-04-10
pressian.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the rise in deepfake-related digital sex crimes, which involve AI systems generating synthetic images or videos that harm victims by violating their privacy and causing psychological harm. The involvement of AI in creating deepfake content directly leads to realized harm, fulfilling the criteria for an AI Incident. The harm includes violations of rights and significant psychological and social damage to victims, primarily women, as detailed in the report.

When your own face appears in AI porn

2025-04-19
shz.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake pornographic content without consent, directly causing harm to individuals' rights, dignity, and mental health. This constitutes a violation of human rights and personal rights, fitting the definition of an AI Incident. The article describes realized harm through the distribution and impact of these AI-generated deepfakes, not just potential harm. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

When your own face appears in AI porn

2025-04-19
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technologies) to create non-consensual pornographic content, which directly leads to harm to individuals (psychological harm, violation of rights). The article describes realized harm to victims and the societal impact of these AI-generated deepfakes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to persons and violations of rights. The discussion of legal and political responses is complementary information but does not change the primary classification.

Cybercrime: When your own face appears in AI porn

2025-04-19
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, directly causing significant harm to individuals' rights and well-being, which fits the definition of an AI Incident. The harms include violations of human rights (privacy, personality rights), psychological injury, and community harm through the spread of non-consensual intimate content. The article reports on actual occurrences of these harms, not just potential risks, and discusses the legal and societal responses. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

When your own face appears in AI porn

2025-04-19
Freie Presse
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technology) to create non-consensual pornographic content, which directly harms individuals by violating their rights and causing psychological trauma. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The article also discusses legal and societal responses but the primary focus is on the realized harm caused by AI-generated deepfakes, not just potential or complementary information.

Cybercrime: When your own face appears in AI porn - Frankenpost

2025-04-19
Frankenpost Zeitungen
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI-generated deepfake pornography, which is a direct application of AI for creating manipulated content. The harms described include violations of personality rights and privacy, which fall under violations of human rights. However, the article does not report a specific new AI Incident (i.e., a particular event where harm has just occurred or been newly discovered) but rather discusses ongoing issues, legal challenges, and policy responses. Therefore, it fits best as Complementary Information, providing context and updates on societal and governance responses to AI harms related to deepfake pornography.

When your own face appears in AI porn

2025-04-19
Nordbayern
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, which directly leads to significant harm to individuals' rights and psychological well-being, constituting violations of human rights and harm to communities. The article describes realized harm from the use of AI-generated deepfakes, not just potential harm. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to violations of rights and harm to persons. The discussion of legal and societal responses is complementary but the core event is the ongoing harm caused by AI-generated deepfake pornography.

When your own face appears in AI porn - Panorama - DIE RHEINPFALZ

2025-04-19
rheinpfalz.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic videos without consent, which directly causes harm to individuals by violating their rights and causing emotional and reputational damage. The article explicitly describes realized harm from AI-generated content and discusses the legal and societal implications. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to communities.

When your own face appears in AI porn - WELT

2025-04-19
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technologies) to create manipulated pornographic content without consent, which directly leads to harm to individuals' rights and psychological well-being. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The article also discusses legal and societal responses but the primary focus is on the harm caused by the AI misuse, not just complementary information or potential hazards.

When your own face appears in AI porn

2025-04-19
Schwaebische
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake pornographic content without consent, which directly leads to harm to individuals' rights and dignity, constituting violations of human rights and personal rights. The harms are realized and ongoing, including psychological harm and violation of privacy. The article also discusses the legal and societal responses but focuses primarily on the harm caused by AI-generated deepfake pornography. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities.

Cybercrime: When your own face appears in AI porn

2025-04-19
verlagshaus-jaumann.de
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of deepfake technology used to create non-consensual pornographic content, which causes harm to individuals (violation of rights). However, it does not report a specific new incident of harm occurring or a particular event where AI use directly led to harm. Instead, it focuses on the broader societal and legal context, ongoing challenges, and policy responses. Therefore, it fits best as Complementary Information, providing context and updates on governance and societal responses to AI-related harms rather than reporting a discrete AI Incident or AI Hazard.

Your own face appears in AI porn: what those affected can do

2025-04-19
Mittelbayerische.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic content without consent, directly causing significant harm to individuals' rights and well-being. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The article details realized harm (non-consensual deepfake pornography) and the societal and legal challenges in responding to it, confirming it as an AI Incident rather than a hazard or complementary information.

When your own face appears in AI porn

2025-04-19
Nau
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI systems that synthesize realistic but fake images or videos. The non-consensual use of such AI-generated content causes direct harm to individuals' rights and dignity, fitting the definition of an AI Incident under violations of human rights and harm to communities. The article describes ongoing harm rather than potential harm, so it qualifies as an AI Incident.

Who those affected can turn to when they discover AI porn of themselves

2025-04-19
GMX
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation tools) to create non-consensual pornographic content, which directly harms individuals by violating their rights and causing psychological trauma. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities. The article also discusses legal and societal responses, but the primary focus is on the realized harm caused by AI-generated deepfake pornography, not just potential or complementary information.

Face theft online! The deepfake porn nightmare

2025-04-19
rtl.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation tools) to create and distribute non-consensual pornographic content, directly causing harm to individuals' rights and dignity, which qualifies as violations of human rights and personal rights under the framework. The harm is realized and ongoing, affecting many victims. Therefore, this constitutes an AI Incident as the AI system's use has directly led to significant harm.

Your own face in deepfake porn: what those affected can do

2025-04-21
DIGITAL FERNSEHEN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technology) to create non-consensual pornographic content, which directly leads to significant harm to individuals, including violations of personal rights and psychological harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to individuals). The article also highlights the legal and societal responses but the primary focus is on the realized harm caused by AI-generated deepfakes, not just potential or complementary information.