Seoul launches AI system to detect and report sexual exploitation videos in six minutes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Seoul City and the Seoul Institute unveiled the nation’s first 24/7 ‘AI automatic deletion-reporting system’ for digital sexual exploitation videos. The AI detects illegal content, compiles evidence, and drafts multilingual removal requests, which are then reviewed by support officers. It slashes reporting time from around three hours to just six minutes.[AI generated]
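The workflow described above (detect illegal content, compile evidence, draft multilingual removal requests, then route to a human support officer for review) can be sketched roughly as follows. This is a minimal illustration only: every class, function, and template name here is hypothetical, since Seoul has not published implementation details.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    url: str
    confidence: float

@dataclass
class CaseFile:
    detection: Detection
    evidence: list = field(default_factory=list)
    requests: dict = field(default_factory=dict)
    approved: bool = False

# Illustrative templates; the reported system drafts removal requests
# in multiple languages for overseas hosting sites.
TEMPLATES = {
    "en": "Takedown request: illegal content detected at {url}.",
    "ko": "(Korean-language takedown request for {url})",
}

def compile_evidence(det: Detection) -> list:
    # Placeholder: a production system would capture screenshots,
    # content hashes, and timestamps as legal evidence.
    return [f"capture:{det.url}", f"score:{det.confidence:.2f}"]

def draft_takedown_requests(det: Detection) -> dict:
    # Fill one request template per supported language.
    return {lang: tmpl.format(url=det.url) for lang, tmpl in TEMPLATES.items()}

def process(det: Detection, reviewer_approves) -> CaseFile:
    # Automated steps first, then a human-in-the-loop review gate:
    # nothing is sent until a support officer approves the case file.
    case = CaseFile(det, compile_evidence(det), draft_takedown_requests(det))
    case.approved = reviewer_approves(case)
    return case
```

The key design point reflected here, consistent with the summary, is that the AI only prepares the case; a human officer remains the final gate before any report is filed.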

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved in detecting and reporting illegal sexual exploitation videos, a use directly related to harm to individuals (victims of sexual exploitation) and communities. By automating detection and reporting, the AI reduces the time harmful content remains online, mitigating injury. Because the system's use directly addresses and prevents harm, this qualifies as an AI Incident under the definition of harm to persons and communities caused by AI system use.[AI generated]
AI principles
Accountability · Fairness · Privacy & data governance · Respect of human rights · Robustness & digital security · Safety · Transparency & explainability · Democracy & human autonomy

Industries
Government, security, and defence · Media, social platforms, and marketing · Digital security · IT infrastructure and hosting

Harm types
Psychological · Reputational · Economic/Property · Human or fundamental rights

Severity
AI incident

Business function:
Citizen/customer service · Compliance and justice · Monitoring and quality control

AI system task:
Recognition/object detection · Content generation


Articles about this incident or hazard

AI finds sexual exploitation videos and files deletion reports in just 6 minutes! A nationwide first

2025-05-21
mediahub.seoul.go.kr
'From finding sex-crime videos to filing deletion reports in 6 minutes'... Seoul develops AI system

2025-05-22
MBN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system that continuously monitors online content to detect harmful digital sexual crime videos and automatically generates deletion requests. This AI system's use directly aims to prevent harm related to the distribution of illegal and harmful content, which can be considered harm to individuals and communities. Since the AI system is actively used to mitigate harm by speeding up the removal process, this is a case of AI system use with a positive impact rather than causing harm. Therefore, it does not describe an AI Incident or AI Hazard. Instead, it provides information about the deployment of an AI system and its societal response to digital sexual crime, which fits the definition of Complementary Information.
"Don't be afraid to report it": sexual exploitation videos deleted within 6 minutes... Seoul launches AI system - Maeil Business Newspaper

2025-05-21
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and report illegal sexual exploitation videos, which constitute a direct harm to individuals (including children and adolescents) and communities. The AI's role in automating detection and reporting directly contributes to mitigating this harm by enabling faster removal of harmful content. Therefore, this event involves the use of an AI system that has directly led to harm mitigation related to digital sexual crimes, qualifying it as an AI Incident under the framework.
Seoul to catch 'digital sex crimes' with AI... from detection to deletion report in '6 minutes'

2025-05-21
Asia Economy Daily
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect illegal digital sexual crime videos and automatically generate deletion reports and emails, which are then sent to hosting sites. This use of AI directly contributes to mitigating harm caused by digital sexual crimes, which are violations of fundamental rights and cause significant harm to victims. Therefore, the event involves the use of an AI system that has directly led to harm mitigation and protection of rights, qualifying it as an AI Incident under the framework.
From finding sex-crime videos to deletion and reporting, done in 6 minutes... Seoul introduces 'AI system'

2025-05-21
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and report digital sexual exploitation videos, which are a form of digital sexual crime causing direct harm to victims. The system's deployment leads to faster removal of harmful content, directly addressing and mitigating harm to individuals' rights and well-being. This fits the definition of an AI Incident because the AI system's use directly leads to addressing violations of human rights and harm to communities. The article reports on the system's active use and impact, not just potential or future risks, so it is not a hazard or complementary information. Therefore, the event is classified as an AI Incident.
Just 6 minutes from finding sexual exploitation videos to filing deletion reports... Seoul develops nation's first

2025-05-21
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect and report illegal sexual exploitation videos, which are a form of digital sexual crime causing harm to individuals, especially minors. The system's use directly supports the removal of harmful content, addressing violations of human rights and protecting vulnerable communities. Since the AI system's use is aimed at mitigating harm and is actively deployed, this event is not a hazard or complementary information but an AI Incident involving the use of AI to address and reduce harm related to digital sexual exploitation.
AI handles 'search, deletion, and reporting' of sex-crime videos... completed in just 6 minutes

2025-05-21
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and remove illegal sexual crime videos online, which constitute harm to individuals and communities (harm category d). The system's development and use directly address and mitigate this harm by automating detection and reporting, thus reducing the time harmful content remains accessible. Since the AI system's use is directly linked to preventing and reducing harm from digital sexual crimes, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm mitigation and victim support.
'6 minutes' from AI detection of digital sex-crime material to deletion requests

2025-05-21
Kookmin Ilbo
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as being used to detect and remove illegal digital sexual crime videos, which constitute violations of human rights and cause harm to individuals and communities. The system's use directly leads to harm mitigation by automating detection and deletion requests, reducing the time from hours to minutes. This is a clear case where the AI system's use is linked to addressing and reducing harm, fitting the definition of an AI Incident. The event does not describe a potential or future harm but an active deployment and use of AI to counteract harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the event.
'Just 6 minutes' from finding illegal footage to deletion... Seoul introduces AI automatic deletion-reporting system | Hankook Ilbo

2025-05-21
Hankook Ilbo
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in detecting and removing illegal sexual crime content, which is a violation of human rights and causes harm to victims. The system's use is directly linked to addressing and mitigating this harm by automating and accelerating the removal process. There is no indication that the AI system caused harm; rather, it is used as a tool to reduce harm. Therefore, this event does not describe a new AI Incident or AI Hazard but rather provides complementary information about an AI system deployed to combat existing harms and improve response efficiency.
Seoul files deletion reports for sexual exploitation videos in 6 minutes with AI - The Fact

2025-05-21
The Fact
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in the detection and reporting of illegal sexual exploitation videos, which are harmful content causing significant harm to individuals and communities. The AI's use directly supports the removal of such content, mitigating harm. Since the AI system's use is directly linked to addressing and reducing harm from illegal content, this qualifies as an AI Incident under the definition of harm to communities and individuals through the use of AI in managing harmful content online.
Seoul: "Just '6 minutes' from AI detection of online sex-crime videos to automatic reporting"

2025-05-21
E-Today
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, performing detection and automated reporting of illegal sexual crime videos online. The use of this AI system directly leads to harm reduction by enabling faster removal of harmful content, which protects victims and communities from ongoing harm. This constitutes an AI Incident because the AI system's use has directly led to a significant positive impact in preventing and mitigating harm related to digital sexual crimes, which are violations of human rights and cause harm to individuals and communities. The article describes realized harm (digital sexual crime videos) and how the AI system's deployment addresses this harm, thus qualifying as an AI Incident rather than a hazard or complementary information.
Seoul: "AI finds illegal video material and handles deletion and reporting within 6 minutes"

2025-05-21
Newspim
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect illegal videos and automatically generate deletion requests, which directly addresses digital sexual crimes. These crimes constitute violations of human rights and cause harm to individuals and communities. Since the AI system's use directly leads to the removal of harmful content and supports victim assistance, it is involved in an event where AI use has directly led to harm mitigation and protection of rights. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use is directly linked to addressing and reducing harm related to digital sexual crimes.
Just 6 minutes from finding sexual exploitation videos to filing deletion reports... Seoul develops nation's first | Yonhap News

2025-05-21
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as monitoring and detecting illegal sexual exploitation videos and automating deletion requests, which directly addresses harm to individuals' rights and well-being (harm to persons). The AI system's deployment reduces the time to remove harmful content, thus mitigating ongoing harm. Since the AI system's use directly leads to harm reduction and protection of victims, this qualifies as an AI Incident under the definition of harm to persons. The article does not describe a potential or future harm but an active system in use that impacts harm outcomes. Hence, the classification is AI Incident.