AI Deepfake Videos Impersonate Doctor, Spread Harmful Medical Misinformation in South Korea


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos impersonating Dr. Lee Guk-jong circulated widely on YouTube, spreading unverified and potentially dangerous medical advice to hundreds of thousands of viewers. The misuse of AI led to public misinformation, risked health harm, and violated the doctor's personal rights, prompting legal and privacy complaints.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to create deepfake videos impersonating a real person, which directly led to harm by spreading false medical information that could endanger viewers' health (harm to persons). Additionally, the use of the professor's image and voice without consent constitutes a violation of personal rights. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through misinformation and rights violations.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Healthcare, drugs, and biotechnology; Media, social platforms, and marketing

Affected stakeholders
General public; Workers

Harm types
Physical (injury); Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Health Tips From Professor Lee Guk-jong?... It Was a Deepfake Video

2026-03-27
Channel A

"이국종 교수가 알려준 건강정보, 허구였다"...딥페이크 사칭 계정 논란 - 매일경제

2026-03-28
Maeil Business Newspaper (mk.co.kr)
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake account using AI to impersonate a medical professional and spread medically unverified and potentially dangerous health advice. The AI system's misuse directly leads to harm by misleading viewers, which can endanger their health and lives. The presence of the AI system (deepfake technology) is explicit, and the harm (health misinformation causing potential injury or death) is direct and materialized. Hence, this is an AI Incident.

"이국종 교수님이 유튜브를?" 68만명 속았다

2026-03-27
Munhwa Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based deepfake technology to impersonate a medical expert and disseminate false medical advice. The misinformation has been widely viewed and accepted by many, posing a direct risk to people's health if they act on the incorrect guidance. The AI system's misuse has directly led to harm by misleading the public about critical health information, fulfilling the criteria for an AI Incident under harm to health and communities.

A Fake Lee Guk-jong Taking Over YouTube?... The AI Deepfake That Fooled 1 Million People

2026-03-26
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create synthetic videos impersonating a real person. The misuse of this AI system has directly led to harm by spreading misleading and unverified medical advice to a large audience, which can cause health harm and misinformation to communities. The impersonation also infringes on privacy rights. These harms fall under the definitions of injury or harm to health and violations of rights. Hence, the event meets the criteria for an AI Incident.

"이국종 교수가 자꾸 '백십구'에 전화하라고"...119로 알아챈 AI 딥페이크, 나흘만에 67만 조회

2026-03-27
The Financial News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating deepfake audio and video impersonating a doctor to disseminate false medical advice. This misinformation can cause injury or harm to health if acted upon, which is a direct harm under the AI Incident definition. The widespread dissemination and public acceptance of the false information further amplify the risk. Therefore, this qualifies as an AI Incident due to the direct link between AI misuse and potential health harm.

Professor Lee Guk-jong on YouTube? A Warning About Fake AI Doctors

2026-03-27
health.chosun.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate synthetic voice and images to impersonate a real person, directly causing harm by spreading misinformation and eroding trust in medical professionals. The harms include violations of rights (reputation, economic interests), potential defamation, and social harm to communities through misinformation. Because the AI system's use is central to the event, it fulfils the criteria for an AI Incident rather than a hazard or complementary information.