Seoul's AI System Rapidly Deletes Digital Sexual Crime Content Nationwide

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Seoul City developed an AI system that detects and deletes illegal digital sexual exploitation content online, cutting removal time from three hours to six minutes and improving accuracy. The technology, credited with a significant increase in the amount of harmful content removed, is now being distributed free of charge to institutions across South Korea to better protect victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly described as detecting and removing illegal and harmful content related to digital sexual crimes, directly protecting victims from ongoing harm. Because the system's use is central to reducing harm and supporting victim protection, the event involves an AI system whose deployment has directly mitigated harm. It therefore qualifies as an AI Incident: the system's use is directly linked to preventing and addressing violations of human rights and harm to individuals and communities.[AI generated]
Industries
Government, security, and defence

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Seoul to Distribute Its 'Digital Sexual Crime AI Deletion Technology' Nationwide Free of Charge

2026-03-02
경향신문
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting and removing illegal and harmful content related to digital sexual crimes, directly protecting victims from ongoing harm. Because the system's use is central to reducing harm and supporting victim protection, the event involves an AI system whose deployment has directly mitigated harm. It therefore qualifies as an AI Incident: the system's use is directly linked to preventing and addressing violations of human rights and harm to individuals and communities.
Seoul Distributes Its Independently Developed 'Digital Sexual Crime AI Deletion Technology' Free of Charge

2026-03-02
경향신문
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting and deleting illegal content related to digital sexual crimes, directly reducing harm to victims and communities by preventing the spread of harmful material. Its use here is a clear example of AI deployment leading to harm mitigation. Because the system's use directly addresses and reduces harm, the event qualifies as an AI Incident under the framework: it involves the use of an AI system to prevent or reduce harm related to human rights violations and harm to communities.
Seoul Moves to Transfer Its 'Digital Sexual Crime AI Deletion Technology'... Free of Charge

2026-03-02
아시아경제
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed and used to detect and delete illegal sexual exploitation content online, a direct application of AI technology. Its deployment has produced concrete harm reduction by removing harmful content faster and more effectively, protecting victims and communities from ongoing harm. Because the system plays a pivotal role in this mitigation, the event meets the criteria for an AI Incident: it concerns active use with a realized impact on harm from digital sexual crimes, not merely AI development or potential harm.
Seoul-Developed 'Digital Sexual Crime Tracking AI' Distributed Nationwide Free of Charge

2026-03-02
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned and used in practice to detect and remove illegal sexual exploitation content, directly reducing harm to victims (injury or harm to persons). The event reports on the system's use and nationwide distribution to support victim protection, indicating realized harm mitigation rather than potential harm. It therefore qualifies as an AI Incident: the system's use directly reduces harm and addresses a serious social harm (digital sexual crimes).
Seoul to Distribute 'Digital Sexual Crime AI Deletion Technology' Nationwide... Free Technology Transfer from the 3rd

2026-03-02
더팩트
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting and deleting illegal sexual exploitation content online, directly reducing harm to individuals (harm to health and dignity) and protecting human rights. Its deployment has already increased the removal of harmful content, indicating realized harm mitigation. The event therefore qualifies as an AI Incident: the system's use directly reduces harm and protects rights, falling under violations of human rights and harm to communities. The article describes an active, beneficial use of AI to prevent and remediate harm, not potential harm or future risks.
Seoul Seeks Nationwide Spread of Its 'Digital Sexual Crime AI Deletion Technology'... Launches Free Technology Transfer

2026-03-02
이투데이
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as detecting and deleting illegal sexual exploitation videos online, directly addressing harm to individuals and communities (human rights violations and harm to communities). The system's use has materially reduced harm by speeding up detection and deletion, preventing further victimization. The article reports on the system's deployment and impact, not just potential risks or future hazards, so it is an AI Incident rather than a hazard or complementary information: the system's development and use have directly led to harm mitigation.
'Video Deletion Cut from 3 Hours to 6 Minutes'... Seoul Takes Its 'Digital Sexual Crime AI Deletion Technology' Nationwide

2026-03-02
아시아투데이
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as monitoring and automatically deleting illegal sexual crime videos, directly protecting victims and preventing the spread of harmful content. This is a direct use of AI leading to harm reduction and victim protection, fitting the definition of an AI Incident: the system's use directly addresses and mitigates harm to people and communities. The article describes the realized positive impact of the system, not potential or future harm.
AI That Hunts Digital Sexual Crimes... Independently Developed by Seoul and Distributed Nationwide Free of Charge

2026-03-02
중앙일보
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting and deleting illegal digital sexual crime content, directly protecting individuals from harm related to digital sexual crimes (harm to persons). Its use has significantly increased the removal of harmful content and shortened response times, indicating realized harm mitigation. The event therefore involves an AI system whose use has directly reduced harm, qualifying it as an AI Incident rather than a hazard or complementary information. The article focuses on the system's deployment and impact on reducing harm, not merely the technology or policy responses.
Seoul Distributes Digital Sexual Crime AI Deletion Technology Nationwide Free of Charge... Strengthening Victim Protection

2026-03-02
Head Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed and used to detect and delete illegal digital sexual crime content, directly protecting victims by removing harmful material from online platforms. Its deployment has significantly increased the amount of harmful content deleted and shortened response times, directly reducing harm to individuals and communities. This fits the definition of an AI Incident: the system's use has directly mitigated harm and protected victims, a positive form of harm management that still qualifies as an incident involving AI and harm. The event is a concrete use of AI to address and reduce harm, not a general product announcement or a future risk.
AI Technology That Hunts Digital Sexual Crimes... Seoul Distributes It Nationwide Free of Charge

2026-03-02
중앙일보
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as detecting and deleting illegal digital sexual crime content, directly addressing violations of human rights and harm to individuals. Its use has led to the removal of harmful content, mitigating ongoing harm. Because the system's use is directly linked to addressing and reducing harm from digital sexual crimes, the event qualifies as an AI Incident under the framework: the system's use has directly mitigated harm related to violations of rights and harm to individuals.