Seoul Deploys Generative AI CCTV to Enhance Urban Safety


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Seoul is investing 27.1 billion KRW to upgrade its CCTV network with generative AI, enabling context-aware monitoring and faster emergency response. The AI system has significantly reduced false alarms and has directly contributed to preventing harm, for example by detecting a fallen citizen and potential fires, improving public safety outcomes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (intelligent CCTV with generative AI) in real-world safety monitoring. The AI's improved detection and contextual analysis directly prevented harm, for example by detecting a fallen citizen and a potential fire, which qualifies as the prevention of injury or harm to persons. This constitutes an AI Incident under the framework because the AI system's use directly led to harm prevention and improved safety outcomes. The article does not merely discuss potential or future risks; it reports an actual AI deployment with demonstrated impact on safety.[AI generated]
Industries
Government, security, and defence

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


Seoul to upgrade CCTV with AI that reads situations and context

2026-02-19
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for context-aware surveillance) in public safety infrastructure. There is no indication that the AI system has caused injury, rights violations, property damage, or other significant harm; the article reports improvements in accuracy and a reduction in false alarms, which are positive outcomes. Because it covers the deployment and enhancement of the system without highlighting specific risks or warning of plausible future harm, the event is best classified as Complementary Information: it provides context and updates on an AI deployment without reporting an AI Incident or AI Hazard.

Seoul introduces 'generative AI monitoring'... a paradigm shift in CCTV safety policy

2026-02-19
Munhwa Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI for surveillance) in a public safety context. However, the article focuses on the planned introduction and enhancement of this AI system, with no mention of any realized harm or incident resulting from its use. The description suggests potential future benefits and improvements rather than any current or past harm. Therefore, this qualifies as Complementary Information, as it provides context and updates on AI deployment in urban safety without reporting an AI Incident or AI Hazard.

"AI checks even the context of an incident"... Seoul invests 27.1 billion KRW to raise CCTV monitoring efficiency

2026-02-19
Etoday
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (intelligent CCTV with generative AI) in real-world safety monitoring. The AI's improved detection and contextual analysis directly prevented harm, for example by detecting a fallen citizen and a potential fire, which qualifies as the prevention of injury or harm to persons. This constitutes an AI Incident under the framework because the AI system's use directly led to harm prevention and improved safety outcomes. The article does not merely discuss potential or future risks; it reports an actual AI deployment with demonstrated impact on safety.

"AI judges even the context of an incident"... Seoul shifts its CCTV monitoring paradigm

2026-02-19
Asia Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI integrated with intelligent CCTV) that actively monitors and analyzes situations to detect risks and abnormal behaviour. The system's deployment has directly contributed to preventing harm to people, as evidenced by specific cases in which emergencies were detected and addressed promptly. This fits the definition of an AI Incident, since the AI system's use directly led to harm prevention (injury or harm to health being avoided). Although the article focuses on positive outcomes, the AI system's material role in shaping physical environments and public safety is clear.

Seoul moves to upgrade CCTV, introduces 'generative AI' monitoring

2026-02-19
Wolyo Shinmun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment and planned expansion of AI systems (generative AI and small language models) in public CCTV surveillance for context-aware monitoring and decision-making. No actual harm or incident is reported; rather, the article focuses on the development and pilot testing phase. Given the nature of AI surveillance systems and their potential to infringe on privacy and civil rights, the event plausibly could lead to AI incidents in the future. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the described developments.

Seoul opens a new chapter in urban safety with AI

2026-02-19
Bridge Economy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment and use of AI systems (intelligent CCTV and generative AI monitoring) that have directly improved safety outcomes by reducing false alarms and enabling timely emergency responses. It provides concrete examples where AI detection led to immediate intervention preventing harm to individuals. This constitutes direct involvement of AI in preventing injury or harm to people, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized impact of AI on urban safety.

Seoul introduces a 'generative AI monitoring system' that reads situations

2026-02-19
Dailian
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI for surveillance and context-aware risk detection) that directly led to harm prevention, specifically the avoidance of injury through the detection of a fallen citizen and a timely emergency response. The system's development and use have demonstrably improved public safety outcomes, which qualifies the event as an AI Incident under the framework's treatment of harm to persons resulting from AI system use. The article reports realized benefits and harm mitigation, not just potential risks or general information, so it is neither a hazard nor complementary information.

Seoul's CCTV gains "eyes that read context"... 27.1 billion KRW invested to introduce generative AI

2026-02-19
Finance Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI integrated with intelligent CCTV) in active surveillance and emergency response. The AI system's outputs have directly led to harm prevention (e.g., detecting a fallen citizen and a fire situation), which constitutes injury or harm prevention to persons, fitting the definition of an AI Incident. The article reports realized benefits and harm mitigation, not just potential risks or future hazards. Therefore, this qualifies as an AI Incident.

Seoul to continue upgrading intelligent CCTV and pilot generative AI

2026-02-19
Kukto Ilbo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions intelligent CCTV systems that use AI for object detection and situational analysis and that have already produced concrete safety interventions protecting citizens from harm. This meets the definition of an AI Incident because the AI system's use directly led to harm prevention (the avoidance of injury or harm to persons). The planned generative AI pilot is a future development, but the AI systems currently in operation have demonstrably contributed to safety, so the event is best classified as an AI Incident rather than a hazard or complementary information.