First Legal Action in South Korea Against Deepfake Election Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Ahead of South Korea's June 3 local elections, a prospective candidate became the first person referred to police for creating and distributing an AI-generated deepfake video that spread false information to mislead voters. Authorities imposed an administrative fine and initiated legal action under newly established laws targeting AI-driven election misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system used to produce deepfake videos, which are AI-generated synthetic media. Using such AI-generated content to spread false information about a candidate violates legal rights and harms the fairness of the election process, constituting a violation of human rights and a breach of legal obligations. Since the harm (misinformation affecting election integrity) has already occurred and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Democracy & human autonomy; Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Election commission files first complaint against AI deepfake video distributor for publishing false information | Yonhap News Agency

2026-02-09
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to produce deepfake videos, which are AI-generated synthetic media. Using such AI-generated content to spread false information about a candidate violates legal rights and harms the fairness of the election process, constituting a violation of human rights and a breach of legal obligations. Since the harm (misinformation affecting election integrity) has already occurred and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Election commission files first complaint against preliminary local-election candidate for distributing an AI deepfake... 5 million won administrative fine also imposed

2026-02-09
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that spread false information about election candidates, which is a direct violation of election laws and harms the integrity of the electoral process. The Central Election Commission's legal action and imposition of fines confirm that harm has occurred due to the AI system's use. The harm includes misinformation affecting voters' ability to make informed decisions, which is a harm to communities and a breach of legal protections. Hence, this is an AI Incident rather than a hazard or complementary information.

Prospective Ulsan local-election candidate accused over 'deepfake video'

2026-02-09
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake videos, which were then used to spread false information about a candidate, directly harming the integrity of the election process and potentially misleading the public. This constitutes a violation of legal rights and harms the community by undermining democratic processes. Since the AI system's use directly led to realized harm (false information dissemination affecting election fairness), this qualifies as an AI Incident.

[Editorial] Fears become reality... campaigning with 'fake videos'

2026-02-09
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to produce and spread false election campaign videos, which misled voters and violated election laws. This misuse of AI directly caused harm to the democratic process and voter trust, fulfilling the criteria for an AI Incident. The involvement of AI in generating deceptive content that influenced public opinion and election fairness is clear and direct. Therefore, this event is classified as an AI Incident.

First complaint filed against publisher of false information using a deepfake video ahead of the June 3 local elections

2026-02-09
Daily Jungang
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create deepfake videos that spread false information in the context of an election, which is a violation of election laws and harms the integrity of the electoral process. The AI-generated content was used maliciously to mislead voters, directly causing harm to communities and violating legal frameworks protecting fair elections. The legal action and penalties confirm that harm has materialized. Hence, this is classified as an AI Incident.

[June 3 Local Elections] National Election Commission: "First complaint against publisher of false information using a deepfake video"

2026-02-09
Newspim
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake video generation using AI) whose use has directly led to the dissemination of false information in an election context, causing harm to the community by potentially misleading voters and undermining election integrity. Legal authorities have taken action against the individual responsible, indicating recognized harm and violation of law. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in spreading misinformation with significant societal impact.

"Selected by US TIME magazine": election promotion with a deepfake video... election commission files first complaint | JoongAng Ilbo

2026-02-09
JoongAng Ilbo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake video generation) used to produce and distribute false election propaganda. This use directly led to a violation of election laws and the potential harm of misleading voters, which is a harm to communities and a breach of legal obligations protecting electoral integrity. The authorities' detection and legal action confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm and legal violations.

"Stop the spread of 'AI deepfakes' in election season"... election commission files first complaint since new law enacted - Sisa Journal

2026-02-09
Sisa Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos that spread false information about a candidate, which is a direct misuse of AI technology causing harm to the electoral process and voters' rights. The legal response and penalties confirm that harm has materialized. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to communities through misinformation dissemination during an election.

"A figure selected by TIME magazine"... caught trying to deceive voters with a deepfake

2026-02-09
MBN (Maeil Broadcasting Network)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to produce a deepfake video, which is an AI system generating manipulated audiovisual content. The use of this AI system directly led to the dissemination of false information aimed at influencing voters, constituting harm to communities and a violation of legal rights related to fair elections. The legal consequences and penalties confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused harm through misinformation and election interference.