South Korean Prosecution Mobilizes Against AI-Generated Election Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ahead of the June 3 local elections, South Korea's prosecution service, led by acting Prosecutor General Koo Ja-hyun, is mobilizing 600 investigators to respond strictly to AI-generated fake news and black propaganda. The crackdown targets AI-driven misinformation, including deepfakes, to protect election integrity and public trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the misuse of AI technology to create and spread fake news, including deepfake videos, which are forms of AI-generated misinformation. This misinformation harms communities and democratic processes, fulfilling the harm criterion of the AI Incident definition (harm to communities). The prosecution's response indicates that such harms have already occurred or are ongoing, not merely potential. Hence, the event involves the use and misuse of AI systems leading directly or indirectly to harm, qualifying it as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Democracy & human autonomy
Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Acting Prosecutor General: "In election cases, fairness is most important... law and principles must be followed" | 연합뉴스

2026-04-13
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to create fake news and black propaganda that could affect election results, which involves AI systems generating harmful misinformation. However, it does not describe a realized harm or a specific event where AI-generated content has caused actual damage or violation. Instead, it emphasizes the prosecution's strict response and preventive measures. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related election misinformation threats without reporting a concrete AI Incident or AI Hazard event.
Acting Prosecutor General Koo Ja-hyun: "Strict response to AI fake news and black propaganda" - 더팩트

2026-04-13
더팩트
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of AI-generated fake news and black propaganda, which could plausibly lead to harm such as misinformation affecting election integrity and societal trust. However, the article does not report an actual incident of harm caused by AI, but rather a call for strict enforcement and vigilance against such potential harms. Therefore, it fits the definition of Complementary Information, as it provides governance and societal response context to AI-related risks without describing a realized AI Incident or a specific AI Hazard event.
Acting Prosecutor General Koo Ja-hyun: "Strict response to election crimes such as fake news"... full investigative capacity mobilized for the June 3 local elections

2026-04-13
아시아경제
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the misuse of AI technology to create and spread fake news, including deepfake videos, which are forms of AI-generated misinformation. This misinformation harms communities and democratic processes, fulfilling the harm criterion of the AI Incident definition (harm to communities). The prosecution's response indicates that such harms have already occurred or are ongoing, not merely potential. Hence, the event involves the use and misuse of AI systems leading directly or indirectly to harm, qualifying it as an AI Incident rather than a hazard or complementary information.
Acting Prosecutor General Koo Ja-hyun: "Fair handling of election cases in accordance with law and principles"

2026-04-13
NewsTomato
Why's our monitor labelling this an incident or hazard?
The article mentions AI technology in the context of increasingly sophisticated fake news that could influence elections, which is a plausible future risk (AI Hazard). However, it does not describe any realized harm or specific event where AI systems have directly or indirectly caused harm. The main content is about prosecutorial guidance and readiness to address election crimes, including those potentially involving AI-generated misinformation. Therefore, this is best classified as Complementary Information, providing context and governance response to AI-related concerns in elections without reporting a new incident or hazard.
Acting Prosecutor General: "Respond strictly to serious election crimes and concentrate all capabilities"

2026-04-13
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology misuse in the context of fake news as a serious election crime that the prosecution will address. This indicates the presence of an AI system (AI-generated fake news) potentially causing harm to communities by influencing voter decisions. However, the article is primarily about the prosecution's planned response and enforcement strategy rather than reporting an actual AI-driven election crime incident. Since no specific harm has yet occurred or been detailed, but a plausible risk is acknowledged, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
AI disinformation tests South Korean laws ahead of local elections

2026-05-07
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and misleading election-related content that is actively spreading and causing harm to the democratic process and public trust, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as evidenced by the detection of AI-generated fake news and manipulated media that confuse voters and fuel conspiracy theories. The involvement of AI in creating this disinformation is explicit, and the harms include violations of rights and harm to communities. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
AI disinfo tests South Korean laws ahead of local elections

2026-05-07
eNCAnews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create false and misleading election content, which has directly led to harm by confusing voters and damaging trust in the electoral process. The involvement of AI in generating disinformation and the resulting enforcement actions under the 2023 law demonstrate that the AI system's use has caused realized harm to communities and democratic processes. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
AI disinfo tests South Korean laws ahead of local elections - Taipei Times

2026-05-07
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation content that is actively spreading and influencing public perception ahead of elections, which constitutes harm to communities and democratic processes. The involvement of AI systems in generating false videos, audio, and other content is clear, and the harm is realized, not just potential. The strengthened laws and government efforts to detect and remove such content confirm the recognized harm. Hence, this is an AI Incident due to the direct and ongoing harm caused by AI-generated election disinformation.
AI disinfo tests South Korean laws ahead of local elections

2026-05-07
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The article involves AI systems generating disinformation, which is a recognized form of harm to communities and democratic integrity. However, the description centers on the detection of, and concern about, AI-generated disinformation rather than on confirmed incidents of harm or legal violations. Since the harm is plausible and the event concerns the potential for AI-driven disinformation to affect elections, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
AI disinfo tests South Korean laws ahead of local elections

2026-05-07
Iraqi News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation content that is actively spreading and influencing public perception ahead of elections, which harms the democratic process and community trust. The involvement of AI systems in creating sophisticated fake content that misleads voters and the government's response to remove such content under legal frameworks confirms direct harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and violations of rights related to election integrity.
AI disinfo tests South Korean laws ahead of local elections

2026-05-07
RTL Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and manipulated content that is actively disseminated during election periods, directly harming the democratic process and public trust, which are harms to communities and violations of rights. The article details realized harm from AI-generated disinformation, not just potential harm, and describes the government's response to this ongoing issue. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
South Korea in a full-scale hunt for AI disinformation ahead of elections

2026-05-07
DH.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos, audio, and other content used to spread false information during elections, which is a direct harm to communities and the democratic process. The presence of AI systems is clear, as the disinformation is created by generative AI technologies (deepfakes, AI-generated songs, fake reports). The harm is realized, not just potential, as these contents are actively circulating and influencing public opinion. The government's efforts to detect and block such content confirm the ongoing nature of the incident. Hence, this is an AI Incident involving the use of AI systems to produce harmful disinformation.
South Korea in a full-scale hunt for AI disinformation ahead of elections

2026-05-07
Medias24 - Site d'information
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation content that is already being disseminated on social media, causing harm to the electoral process and public trust, which constitutes harm to communities. The use of AI systems to create realistic fake videos and audio that mislead voters is a direct cause of this harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm. The article also discusses government responses, but the primary focus is on the ongoing harm caused by AI-generated disinformation.
South Korea in a full-scale hunt for AI disinformation ahead of elections - Orange

2026-05-07
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating disinformation content that is actively spreading on social media, which harms communities by misleading voters and threatening election integrity. The government's detection and blocking efforts confirm the presence and impact of AI-generated harmful content. The harm is realized (not just potential), as the article cites examples of AI-generated fake videos and songs influencing political perceptions. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through disinformation.
The Republic of Korea in a full-scale hunt for AI disinformation ahead of elections

2026-05-07
lecourrier.vn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and misleading content (deepfakes, fabricated videos, AI-generated songs) that are actively disseminated and influencing public opinion ahead of elections, causing harm to social trust and democratic processes (harm to communities). The government's use of AI detection tools and human intervention confirms the presence and impact of AI systems. Since the harm is realized and ongoing, this is an AI Incident rather than a hazard or complementary information.
South Korea in a full-scale hunt for AI disinformation ahead of elections

2026-05-07
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as generating disinformation content (deepfakes, fake videos, audio, and songs) that have been detected and are actively influencing elections, which constitutes harm to communities and democratic rights. The government's response to detect and block such content confirms the presence and impact of AI-generated misinformation. The harm is realized, not just potential, as fake AI-generated content has already been disseminated and influenced public opinion. Hence, this is an AI Incident rather than a hazard or complementary information.
"Our work is getting harder and harder": ahead of the elections, South Korea is in a full-scale hunt for AI-generated disinformation

2026-05-08
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation content being spread on social media, which is directly linked to harm to communities and democratic processes. The government's recruitment of operators and use of AI detection software confirms the presence and use of AI systems. The harm is realized as the disinformation is actively circulating and influencing public perception ahead of elections. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights related to election integrity. The article does not merely warn of potential harm but describes ongoing harm and responses to it.