South Korean Police Deploy AI to Combat Election Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ahead of local elections, South Korea's National Police Agency raised its election-crime response to the highest level, deploying AI-driven systems to detect and analyze AI-manipulated fake news and disinformation. The initiative aims to investigate election-related crimes swiftly and to protect the integrity of the democratic process.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves police use of AI systems to detect and analyze AI-manipulated content that spreads false information during elections, a direct response to harm to communities and the electoral process. Because the police are actively deploying AI to detect and counter these harms, the monitor treats the harm from AI-generated disinformation as realized. The event therefore qualifies as an AI Incident: the AI system's use is directly linked to addressing harms caused by AI-generated false content affecting the election.[AI generated]
Industries:
Government, security, and defence

Severity:
AI incident

Business function:
Compliance and justice

AI system task:
Event/anomaly detection


Articles about this incident or hazard


Police Raise Election-Crime Response System to Level 3... Taking Aim at AI Fake News

2026-05-10
더팩트
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake news as a target of police action and the use of AI analysis tools to investigate such content, confirming AI system involvement in the context of election misinformation. However, it describes neither a specific realized harm caused by AI-generated fake news nor a plausible future harm scenario absent current harm; it focuses on the escalation of the law enforcement response and the technical measures taken against AI-related election crimes. This fits the definition of Complementary Information: governance and societal responses to AI harms, rather than a new AI Incident or AI Hazard.

Police to Raise Election-Crime Response to Highest Level from the 14th

2026-05-10
뉴스핌
Why's our monitor labelling this an incident or hazard?
The event involves police use of AI systems to detect and analyze AI-manipulated content that spreads false information during elections, a direct response to harm to communities and the electoral process. Because the police are actively deploying AI to detect and counter these harms, the monitor treats the harm from AI-generated disinformation as realized. The event therefore qualifies as an AI Incident: the AI system's use is directly linked to addressing harms caused by AI-generated false content affecting the election.

Police Raise 'Election-Crime Response' to Highest Level... Cracking Down on AI Manipulation

2026-05-10
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as part of a law enforcement response to detect and analyze AI-manipulated content related to election crimes. The harm at stake is the distortion of voter intent and the potential undermining of democratic processes, which constitutes harm to communities. Because the article describes the active, real-world deployment of this AI system to address ongoing or imminent harms from election misinformation, and its use is directly linked to those harms, the event qualifies as an AI Incident.

Police to Respond Strictly to Illegal Activity in Local Elections... 24-Hour Investigation System Reinforced

2026-05-10
아시아투데이
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by police to analyze and respond to AI-manipulated false election content; the system is involved in the use phase (investigative use) to detect and track harmful AI-generated disinformation. The article does not, however, report that AI-generated content has directly caused harm, or that an AI system malfunctioned or was misused to cause harm; it describes a governance and law enforcement response to potential AI-related election crimes. This fits the definition of Complementary Information: societal and governance responses to AI-related risks, without a new AI Incident or AI Hazard.

Police Raise Election-Crime Response to Highest Level Ahead of June 3 Local Elections

2026-05-10
kgnews.co.kr
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI technology to detect and analyze manipulated content in order to prevent election-related crimes, a governance and operational measure. There is no indication that an AI system has caused harm or that an AI-related incident has occurred. The focus is on preparedness and response to potential AI-enabled misinformation, which fits the definition of Complementary Information: context and updates on societal and governance responses to AI-related risks, without a new AI Incident or AI Hazard.

Police to Raise Election-Crime Response to Level 3 from the 14th... Policy of Swift Investigation

2026-05-10
국제뉴스
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to analyze manipulated content related to election crimes. Its use is part of the police's active response to ongoing or imminent harms from disinformation and black propaganda in elections, which can disrupt democratic processes and harm communities. Because the AI system's use is directly linked to addressing and mitigating these harms, the event qualifies as an AI Incident involving the use of AI systems to counteract harm.

Police Move Election-Crime Response to Level 3... Taking Aim at AI Fake News

2026-05-10
YTN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos and news used in the context of elections, which can harm communities and democratic processes (harm to communities). Although no specific harm is reported as having occurred, the police's elevated response and tracking of AI models indicate recognition of a credible risk, and the use of AI systems to generate and spread fake content could plausibly lead to an AI Incident if unchecked. Because the article focuses on the potential threat and law enforcement's preparatory measures rather than a realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Jeonbuk Local Elections Enter the Main Race... Police in All-Out Effort

2026-05-10
뉴스프리존
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of AI-generated fake news and deepfake videos, recognized as potential tools for election interference and misinformation. The police's establishment of a response system indicates recognition of a plausible risk of harm to the electoral process and to voters' rights. Because no actual harm or incident has been reported, but there is a credible risk that AI-generated disinformation could disrupt the election and violate rights, the situation qualifies as an AI Hazard rather than an AI Incident: the article focuses on the potential for harm and the preventive response, not a realized harm event.

[Caption News] With the Election Imminent, Even a Mayoral Campaign Speech Is Manipulated... Police Raise Response to 'Level 3, Highest Level'

2026-05-11
YTN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos used in election campaigns, a direct misuse of AI technology that harms communities by spreading misinformation and undermining democratic rights. The police response of tracking and countering these AI-generated manipulations confirms the AI system's role in the harm. The event therefore qualifies as an AI Incident: realized harm from AI misuse in the election context.