Deepfake Videos Defame South Korean President and First Lady

The information displayed in the AIM (the OECD's AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

South Korea's regulatory agencies are acting swiftly against deepfake videos defaming President Yoon Suk-yeol and First Lady Kim Keon-hee. The manipulated videos, shown at pro-impeachment protests, have prompted legal investigations and led YouTube to remove related content over serious defamation and human rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake generation is a malicious use of AI to fabricate realistic but false video content. The distribution of these deepfakes has already occurred and prompted defamation charges and regulatory blocking due to the real risk of public harm and confusion. This constitutes a realized incident of AI-driven disinformation.[AI generated]
AI principles
Respect of human rights
Transparency & explainability
Accountability
Safety
Democracy & human autonomy
Privacy & data governance
Robustness & digital security

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government
Women

Harm types
Reputational
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

KCSC resolves to block access to two deepfake videos of President Yoon and his wife

2025-02-18
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
This item centers on regulatory and legal responses to the misuse of AI-generated deepfakes rather than describing a novel harm event or warning of a future risk. It documents governance measures (access blocking and prosecution) taken to address previously reported AI misuse, fitting the definition of Complementary Information.
KCSC blocks two deepfake videos of the presidential couple | Hankook Ilbo

2025-02-18
Hankook Ilbo
Why's our monitor labelling this an incident or hazard?
Deepfake generation is a malicious use of AI to fabricate realistic but false video content. The distribution of these deepfakes has already occurred and prompted defamation charges and regulatory blocking due to the real risk of public harm and confusion. This constitutes a realized incident of AI-driven disinformation.
KCSC blocks access to two deepfake videos of President Yoon and his wife - Jeonpa Shinmun

2025-02-18
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
Deepfake videos are clearly generated by an AI system and were distributed to mislead the public and provoke social unrest. The content has already circulated (incident at the Gwangju rally) and led to controversy, meeting the definition of an AI Incident since it directly contributed to harmful misinformation and social confusion.
KCSC blocks access to two deepfake videos of the presidential couple

2025-02-18
inews24
Why's our monitor labelling this an incident or hazard?
The incident involves AI-generated deepfake content that was distributed online, creating misinformation and posing a clear harm to societal stability (harm to communities). The AI system’s misuse (deepfake generation) directly led to potential public disorder, and the regulator’s blocking decision responds to an ongoing, realized AI-driven harm. Therefore, this qualifies as an AI Incident.
Silenced again... access blocked to satirical Yoon deepfake parodying the martial law declaration

2025-02-18
Media Today
Why's our monitor labelling this an incident or hazard?
The piece centers on a governance response—legal/regulatory proceedings and policy debate—regarding AI-generated deepfakes, rather than on any realized harm (AI Incident) or a new plausible threat (AI Hazard). It provides context on government and public reactions to AI deepfakes, fitting the definition of Complementary Information.
KCSC blocks access to two presidential-couple deepfake videos for 'causing social confusion'

2025-02-18
MBN
Why's our monitor labelling this an incident or hazard?
While the deepfake videos themselves represent harmful AI-generated misinformation, the article’s primary focus is on the regulatory response (blocking and removal) by a government body. This is a governance measure addressing previously identified harmful content, fitting the definition of Complementary Information rather than a new AI Incident or Hazard.
KCSC decides to block access to two deepfake videos of President Yoon Suk-yeol and First Lady Kim Keon-hee

2025-02-18
ChosunBiz
Why's our monitor labelling this an incident or hazard?
The deepfake videos are AI-generated manipulated content that has been distributed and poses a direct risk of social disorder and harm to public trust. This constitutes an actual misuse of AI (deepfake generation) resulting in misinformation, meeting the criteria for an AI Incident.
KCSC resolves to block access to two deepfake videos of President Yoon and his wife

2025-02-18
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
While the deepfake videos themselves represent AI-enabled misinformation (an AI Incident in the abstract), the article’s primary focus is on the regulator’s swift governance response—blocking access and urging early monitoring—which falls under societal/governance actions to address AI harms. Therefore it is best classified as Complementary Information.
KCSC blocks access to two deepfake videos of President Yoon and his wife | Yonhap News

2025-02-18
Yonhap News
Why's our monitor labelling this an incident or hazard?
The deepfake videos were previously circulated and risked causing social disorder (an AI incident). This article’s primary focus is on the regulator’s mitigation action—blocking access—which constitutes a follow-up update on a past AI-driven misinformation incident rather than a newly emerging harm. Therefore, it is best classified as Complementary Information.
Presidential office: 'Deepfake video of the Yoon couple at impeachment rally... strong legal response'

2025-02-16
Etoday
Why's our monitor labelling this an incident or hazard?
This event involves the deliberate use of an AI system (deepfake generation) to produce defamatory, non-consensual content that was shown at a public rally, causing reputational and personal rights harm. It is a concrete incident of AI misuse rather than a hypothetical risk or an update.
Presidential office 'cannot contain its anger' over deepfake video of the presidential couple

2025-02-16
wowtv.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake generation) to create and disseminate defamatory and harassing content, resulting in actual harm to the individuals’ reputations and rights. This is a realized harm rather than a hypothetical risk, so it qualifies as an AI Incident.
Presidential office and ruling party: 'Deepfake video of the Yoon couple at impeachment rally... legal response' [Roundup]

2025-02-16
Etoday
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation) whose use directly led to a serious violation of the president’s and first lady’s personal rights and dignity. This is realized harm (defamation, non-consensual explicit imagery) caused by AI, fitting the AI Incident definition.
Presidential office on Yoon-couple deepfake video: 'An insult to the head of state... legal action'

2025-02-16
Seoul Economic Daily
Why's our monitor labelling this an incident or hazard?
A deepfake video was generated using AI and shown at a public rally, directly infringing on the personal dignity and rights of the president and his wife. This constitutes a realized harm (defamation and non-consensual image manipulation) caused by the use of an AI system, fitting the definition of an AI Incident.
Deepfake videos of President Yoon and his wife blocked on YouTube, channel suspended

2025-02-17
Asia Economy
Why's our monitor labelling this an incident or hazard?
Deepfake video generation is a use of AI that directly led to reputational and personal-rights harm (a form of defamation and dignity violation). The content was posted publicly, causing real harm and prompting legal and regulatory actions. This meets the definition of an AI Incident.
Presidential office: 'Deepfake video of the presidential couple at impeachment rally... legal response'

2025-02-16
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
A deepfake video is created using generative AI and was used to humiliate and defame the president and his spouse, constituting a realized harm (violation of personal rights and dignity). This meets the criteria for an AI Incident under violations of human rights and defamation caused by an AI system’s outputs.
Presidential office on Yoon-couple deepfake video: 'Character defamation... legal response'

2025-02-16
Kukmin Ilbo
Why's our monitor labelling this an incident or hazard?
The deepfake was produced using AI to maliciously synthesize the faces of the president and spouse and was publicly displayed, directly infringing on their personal and human rights. This is a realized harm caused by misuse of an AI system, fitting the criteria for an AI Incident.
Ruling party: 'Yoon-couple deepfake video... a clear crime of sexual violence'

2025-02-16
Kukmin Ilbo
Why's our monitor labelling this an incident or hazard?
The creation and public screening of a deepfake sexual video uses AI to falsify and defame individuals, directly causing reputational and psychological harm and infringing on personal and human rights. This is a realized incident of AI misuse resulting in harm.
Presidential office on Yoon-couple deepfake video: 'Character defamation... strong legal response' | Hankook Ilbo

2025-02-16
Hankook Ilbo
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generator) was used to produce synthetic, defamatory imagery of a sitting head of state, resulting in an actual harm—violation of personal and human rights through insulting, manipulated content. This direct misuse of AI for defamation constitutes an AI Incident under human rights violations (harm category c).
Presidential office to take legal action over presidential deepfake video

2025-02-16
Jeonbuk Domin Ilbo
Why's our monitor labelling this an incident or hazard?
The deepfake video involves an AI system generating synthetic imagery of the president and first lady without consent, displayed publicly to mock and insult them. This misuse of AI directly violates personal dignity and human rights, fitting the definition of an AI Incident (rights violation through AI-generated content).
KCSC announces expedited review of presidential-couple deepfakes... YouTube moves to block

2025-02-17
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
This item primarily reports on governance, legal, and platform actions taken in response to a previously surfaced deepfake incident, rather than describing a new AI-driven harm or potential hazard. It thus serves as complementary information updating on mitigation and enforcement measures related to an earlier AI Incident.
'In swimsuits and underwear': deepfake of the presidential couple... KCSC signals removal

2025-02-17
Financial News
Why's our monitor labelling this an incident or hazard?
This is a realized case of harm via an AI system (deepfake) that manufactured defamatory content, violating the subjects’ rights and dignity. The AI’s misuse directly caused reputational and personal-honor harm, constituting an AI Incident.
KCSC to expedite review of deepfake videos of President Yoon and his wife

2025-02-17
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The deliberate creation and dissemination of a deepfake image of real individuals without consent constitutes a realized harm—violating personal rights and causing social disruption. The use of AI-generated content for defamation and in a public setting leading to official investigations and blocking measures qualifies this event as an AI Incident.
'Swimsuit deepfake' video of the presidential couple shown at pro-impeachment rally... legal response declared

2025-02-17
Insight
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deep learning–based deepfake generation) whose use directly led to harm: public defamation and serious insult to the president and his spouse, a violation of personal rights. This is a realized harm caused by an AI system, meeting the criteria for an AI Incident.
Presidential office and ruling party: 'Legal response to Yoon-couple deepfake... opposition should join'

2025-02-16
Financial News
Why's our monitor labelling this an incident or hazard?
The event involves the development and malicious use of an AI system (deepfake technology) that directly caused harm—defamation, psychological injury, and infringement of personal rights—triggering legal responses. This qualifies as an AI Incident.
Why the ruling party, saying it 'cannot stand by', is furious over the Yoon-couple deepfake

2025-02-16
Seoul Economic Daily
Why's our monitor labelling this an incident or hazard?
The event describes the creation, distribution, and public screening of a deepfake pornographic video using AI face-swap technology. This non-consensual, sexualized depiction of real individuals is a form of sexual violence, defamation, and violation of personal dignity. Because the AI system’s misuse has directly caused harm, it meets the criteria for an AI Incident.
KCSC expedites review of President Yoon deepfake videos... blocked on YouTube

2025-02-17
MBN
Why's our monitor labelling this an incident or hazard?
Deepfake technology—an AI system—was used to produce and circulate manipulated videos targeting the president and his spouse. The broadcast commission’s rapid review, YouTube’s blocking of the content, and legal complaints underscore that the AI-driven deepfake has already inflicted reputational and rights-based harm. Hence, this qualifies as an AI Incident.
'Yoon-couple swimsuit deepfake' causes uproar at Gwangju pro-impeachment rally... presidential office vows 'legal response'

2025-02-16
Financial News
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of a deepfake AI system to produce non-consensual manipulated content targeting a public figure. This directly caused harm by violating personal dignity and privacy rights, triggering the presidential office’s legal response. Such actualized defamation and rights infringement qualify as an AI Incident under the framework.
Presidential office expresses 'strong regret' over Yoon-couple deepfake video

2025-02-16
pressian.com
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (deepfake face-synthesis) to produce and publicly broadcast a video that defames and violates the personal rights of the president and his spouse. The harm has materialized, meeting the definition of an AI Incident.
Presidential office: Yoon-couple deepfake video 'an insult to the head of state... will respond legally' | Aju Business Daily

2025-02-16
Aju Business Daily
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of a deepfake AI system to produce and display non-consensual, demeaning imagery of a sitting head of state and spouse. This directly harms personal rights and dignity, fitting the definition of an AI Incident (violation of human rights and defamation through AI-generated content).
Yoon-couple deepfake video at impeachment rally... presidential office vows 'legal response' - The Fact

2025-02-16
The Fact
Why's our monitor labelling this an incident or hazard?
An AI-generated deepfake video was created and publicly displayed, directly causing harm by defaming the president and infringing on personal dignity. This is a realized incident of AI misuse, not merely potential harm or contextual discussion.
Presidential office: 'Yoon-couple deepfake video at impeachment rally... character defamation, human rights violation'

2025-02-16
Chosun Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves the deliberate creation and public display of an AI‐generated deepfake that directly inflicts reputational and personal rights harm on real individuals. This unauthorized use of AI to produce defamatory content constitutes an AI Incident under the framework’s definition of human rights and dignity violations.
Deepfake video of the presidential couple at impeachment rally... presidential office vows 'legal response'

2025-02-16
Newspim
Why's our monitor labelling this an incident or hazard?
The event describes the creation and public display of a deepfake targeting the president and spouse, directly causing reputational and privacy harm through an AI system’s misuse. This is a realized harm (defamation/invasion of dignity), so it qualifies as an AI Incident rather than a potential hazard or mere background update.
Presidential office signals legal action against those involved in presidential-couple deepfake video

2025-02-16
MBN
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generation) was used to create and publicly present a falsified video depicting the president and his wife. This misuse constitutes direct harm (defamation, violation of personal dignity and political rights) caused by the AI system’s outputs. The harm is realized, making this an AI incident rather than a potential hazard or merely complementary information.
Presidential office vows 'strong legal response' to Yoon deepfake video at pro-impeachment rally

2025-02-16
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI-generated deepfake video that has already been shown publicly, causing harm through defamation and violation of personal rights. This meets the criteria for an AI Incident under human rights infringement (c).
KCSC to expedite review of deepfake videos of President Yoon and his wife | Yonhap News

2025-02-17
Yonhap News
Why's our monitor labelling this an incident or hazard?
A deepfake is a generative AI misuse that has directly led to defamation, infringement of personality and human rights, and risks of social chaos. The incident has already occurred (video shown, complaints filed), and authorities are taking action. This meets the criteria for an AI Incident.
Presidential office on Yoon-couple deepfake video: 'A criminal act of character defamation... strong legal response'

2025-02-16
Yonhap News
Why's our monitor labelling this an incident or hazard?
An AI deepfake system was used to synthesize and publicly display manipulated likenesses of the president and spouse, causing reputational harm and infringing upon their fundamental rights. Because this is a realized harm directly enabled by an AI system, it qualifies as an AI Incident.
Presidential office and ruling party on Yoon-couple deepfake: 'A defamation crime... legal response' (Roundup) | Yonhap News

2025-02-16
Yonhap News
Why's our monitor labelling this an incident or hazard?
A deepfake (an AI system output) was intentionally produced and displayed, directly causing harm to the individuals’ dignity and rights. This is a realized harm from malicious AI use, meeting the criteria for an AI Incident under violations of human rights and personal harm.
Semi-nude deepfake video of the Yoon couple at pro-impeachment rally... presidential office calls it a 'defamation crime' - Maeil Business Newspaper

2025-02-16
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake generation) to create harmful, defamatory content and publicly display it, causing real harm to the subjects’ personal rights and reputation. This meets the definition of an AI Incident under violations of human rights and personal dignity.
Presidential office on Yoon-couple deepfake video: 'A blatant insult... strong legal response'

2025-02-16
Asia Economy
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (deepfake technology) directly causing harm—non-consensual face synthesis of public figures leading to defamation, personal rights violations, and outrage. Actual harm occurred, so it qualifies as an AI Incident under violations of human rights and personal dignity.
KCSC decides on expedited review of presidential-couple deepfake videos

2025-02-17
Kyunghyang Shinmun
Why's our monitor labelling this an incident or hazard?
The news item centers on the Communications Standards Commission’s rapid-review decision—a governance response—to an already circulated deepfake. The underlying incident (the malicious AI-generated video) is background; the main narrative is the regulator’s investigatory action. This matches the definition of Complementary Information, as it reports follow-up/governance activity around a previously published AI incident rather than introducing a brand-new hazard or incident.
Presidential office: 'Face-synthesis video of the presidential couple played at Gwangju rally... legal action'

2025-02-16
YTN
Why's our monitor labelling this an incident or hazard?
The incident involves the use of a generative AI system (deepfake technology) to create and display manipulated images of the president and spouse, amounting to defamation, serious personal rights violations, and criminal behavior. This represents an actual harm caused by AI misuse.
Over Yoon-couple deepfake video... presidential office: 'Defamation crime, legal response'

2025-02-16
inews24
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of an AI system (deepfake synthesis) to generate and display manipulated imagery of the president and spouse, causing defamation and human rights harms. This harm is realized rather than hypothetical, and the role of the AI system in producing the video is central to the incident. Therefore, it qualifies as an AI Incident.
Presidential office and ruling party 'appalled by defamation crime' of Yoon-couple deepfake video... complaint to be filed on the 17th

2025-02-16
ChosunBiz
Why's our monitor labelling this an incident or hazard?
The event involves the creation and public use of a deepfake—an AI system generating manipulated, harmful content—that directly infringes on the subjects’ personal rights and constitutes a criminal act. The harm has occurred (defamation, potential sexual violence), making this an AI Incident rather than a potential hazard or mere commentary.
Seoul police transfer Yoon-couple deepfake case to Gwangju police | Yonhap News

2025-02-18
Yonhap News
Why's our monitor labelling this an incident or hazard?
The incident involves the malicious use of an AI-generated deepfake to produce non-consensual, sexually explicit content depicting real individuals. This constitutes a direct violation of personal and human rights and has led to legal action, so it meets the criteria for an AI Incident.
Gwangju police to investigate presidential-couple deepfake case

2025-02-18
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
This is a clear case of direct misuse of an AI system (deepfake generation) that has already occurred, resulting in legal action for violations of the law (sexual crimes special statutes) and harm to the reputation and rights of the individuals involved. Therefore, it meets the criteria for an AI Incident.
Presidential office on Yoon-couple deepfake: 'Serious character defamation... will take legal action'

2025-02-16
Kuki News
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake image-synthesis) was knowingly used to fabricate and disseminate harmful, non-consensual content targeting the president and first lady, causing serious reputational and psychological harm. The harms have materialized (defamation, insult, human-rights infringement), triggering legal action. Therefore this is an AI Incident.
[Breaking] Presidential office to take legal action over deepfake video of the presidential couple

2025-02-16
wowtv.co.kr
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI system misuse that has already occurred and caused harm—namely, serious defamation, personal dignity and human-rights violations against the president and first lady. The incident’s core is the malicious use of an AI tool to synthesize and publicly show fabricated video content, which directly led to real reputational and emotional harm. Therefore, this event qualifies as an AI Incident rather than a future hazard, mere update, or unrelated news.