South Korea Deploys AI System for Automated Detection and Removal of Digital Sexual Exploitation Content


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korea's Ministry of Gender Equality and Family launched an AI-powered system to automatically detect, report, and request deletion of digital sexual exploitation content, including deepfakes, across about 20,000 websites. The system automates and accelerates victim protection, significantly increasing detection rates and reducing processing time to under one minute per case.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system explicitly described as detecting harmful content related to digital sexual crimes and automating deletion requests. The AI system's use directly contributes to preventing harm to victims of sexual exploitation and abuse, which falls under harm to persons (a). Since the AI system is actively used to mitigate and respond to ongoing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on an operational AI system that has a direct role in harm prevention and victim protection.[AI generated]
Industries
Government, security, and defence; Digital security

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


AI detects and reports digital sex crime material 24 hours a day... "processing time per case reduced"

2026-03-31
Yonhap News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system explicitly described as detecting harmful content related to digital sexual crimes and automating deletion requests. The AI system's use directly contributes to preventing harm to victims of sexual exploitation and abuse, which falls under harm to persons (a). Since the AI system is actively used to mitigate and respond to ongoing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but reports on an operational AI system that has a direct role in harm prevention and victim protection.

Digital sex crime material detected and reported by AI 24 hours a day... "under one minute of processing time per case"

2026-03-31
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the real-time detection of illegal and harmful digital sexual crime content and the automated filing of deletion requests. The system's use directly addresses harm related to child and youth sexual exploitation, which constitutes harm to individuals and communities. Since the AI system's use actively prevents and mitigates harm by enabling rapid response and removal of such content, this qualifies as an AI Incident under the definition covering harm related to human rights violations and harm to communities. The event is not merely a potential risk or a complementary update but describes an active system in use with a direct impact on harm reduction.

Catching 'digital sex crimes' with AI... automated detection and deletion system established

2026-03-31
Etoday
Why's our monitor labelling this an incident or hazard?
The AI systems are explicitly mentioned as being used for automated detection and deletion of illegal and harmful content related to digital sexual crimes. Their use directly addresses violations of human rights and protects individuals from harm. Since the AI systems' deployment leads to direct mitigation of harm and protection of victims, this qualifies as an AI Incident under the framework, as the AI system's use is directly linked to preventing and responding to harm.

Preemptive response to digital sex crimes with AI... Ministry of Gender Equality and Family puts 'victim protection system' into full operation

2026-03-31
Newspim
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems deployed for automated detection and removal of digital sexual exploitation content and deepfake detection, which are AI systems as per the definition. The AI systems are used in the development and use phases to enhance victim protection and reduce harm. However, the article does not report any harm caused by the AI systems themselves, nor does it describe any plausible future harm from these AI systems. Instead, it reports on the positive application of AI to prevent and mitigate harm. This fits the definition of Complementary Information, as it details governance and societal responses to digital sexual crimes using AI technology, improving understanding of AI's role in harm reduction.

Digital sex crime material automatically detected and reported by AI 24 hours a day... "one minute of processing time per case"

2026-03-31
Asia Economic Daily
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for 24/7 automatic detection and deletion of illegal sexual exploitation videos and deepfake content. The system's deployment directly impacts the removal of harmful content, thereby addressing violations of human rights and harm to communities. Since the AI system's use is directly linked to mitigating ongoing harm from digital sexual crimes, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

'Digital sex crimes' automatically reported by AI 24 hours a day... automated detection and deletion system established

2026-04-01
Asia Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems actively used to detect and remove digital sexual crime content, which directly relates to harm to persons (victims of sexual exploitation and abuse). The AI's deployment is not merely potential or advisory but operational and producing real-world effects in protecting victims and reducing harm. Hence, this qualifies as an AI Incident because the AI system's use is directly linked to addressing and mitigating significant harm to individuals and communities.

Digital sexual exploitation material caught within one minute... automatic detection and reporting by AI

2026-04-01
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the detection and automated reporting of harmful digital sexual exploitation content, which directly relates to harm to individuals (children and adolescents) through exploitation and abuse. The AI system's use has led to realized harm mitigation by enabling faster removal of harmful content, thus preventing further harm. Therefore, this qualifies as an AI Incident because the AI system's use directly addresses and mitigates harm related to digital sexual exploitation, a serious violation of human rights and harm to vulnerable groups.

[HelloT] AI covers detection, analysis, and reporting of 'digital sex crimes'... each case processed in under one minute

2026-04-01
hellot.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems explicitly designed to detect and respond to digital sexual crimes, which are serious violations of human rights and cause harm to individuals and communities. The AI system's development and use directly lead to harm mitigation by enabling faster and more accurate detection and removal of harmful content, thus protecting victims. This fits the definition of an AI Incident because the AI system's use is directly linked to addressing and reducing harm related to digital sexual exploitation, a clear violation of rights and harm to communities.