AI Deepfake Scam Targets Hospital Director in Taiwan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Fraudsters used AI deepfake technology to create convincing fake videos in which Changhua Christian Hospital director Chen Mu-Kuan appears to endorse medical products. The deepfakes misled both staff and the public, creating financial and health risks. The hospital is pursuing legal action to protect its reputation and public health. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (deepfake technology) to create fraudulent videos impersonating a medical professional, leading to people being scammed and potentially harmed. The harm is realized as people have been deceived and have purchased products based on false endorsements. This fits the definition of an AI Incident because the AI system's use directly led to harm (fraud, misinformation, potential health risks). [AI generated]
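The rationale above implies a simple decision rule: an event is labelled an "AI incident" when an AI system's use has already led to realized harm, and a "hazard" when the harm is only potential. A minimal sketch of that rule, assuming a hypothetical two-field event record (this is illustrative, not the monitor's actual implementation):

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Hypothetical event record; field names are illustrative assumptions."""
    involves_ai_system: bool  # e.g. deepfake content generation
    harm_realized: bool       # e.g. people have already been defrauded


def label(event: Event) -> str:
    """Apply the incident-vs-hazard rule as stated in the monitor's rationale."""
    if not event.involves_ai_system:
        return "not applicable"
    return "AI incident" if event.harm_realized else "AI hazard"


# The deepfake case: an AI system was used and harm (fraud) has occurred.
print(label(Event(involves_ai_system=True, harm_realized=True)))  # AI incident
```

Under this reading, the same deepfake campaign would have been a hazard before anyone was deceived, and became an incident once purchases based on the false endorsements occurred.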
AI principles
Privacy & data governance
Transparency & explainability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Business
General public

Harm types
Economic/Property
Physical (injury)
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

The price of fame! AI face-swap used to falsely depict director Chen Mu-Kuan endorsing products; Changhua Christian Hospital to take legal action | 聯合新聞網

2026-05-12
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create fraudulent videos impersonating a medical professional, leading to people being scammed and potentially harmed. The harm is realized as people have been deceived and have purchased products based on false endorsements. This fits the definition of an AI Incident because the AI system's use directly led to harm (fraud, misinformation, potential health risks).
Renowned oral cancer doctor "endorses" knee braces, even invoking "humble service"; employees of his own medical center also taken in | Health | Life | NOWnews今日新聞

2026-05-12
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI deepfake technology (an AI system) used maliciously to create fake endorsement videos. The misuse of AI directly causes harm by misleading people, including hospital employees, into trusting and acting on false medical advice, which can harm public health and violate legal and ethical standards. Therefore, this is an AI Incident due to realized harm from AI misuse in generating deceptive content causing public and individual harm.
Beware of scams! AI deepfake video of Changhua Christian Hospital director Chen Mu-Kuan used for sales pitches; employees also fooled - 自由健康網

2026-05-12
健康醫療
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake videos of a medical professional, which have been used to deceive people into buying dubious products, causing financial and health-related harm. This constitutes an AI Incident because the AI system's misuse has directly led to harm to people and communities through fraud and misinformation.
Director's face swapped by AI for fake endorsements; Changhua Christian Hospital urges the public not to be deceived - Changhua County - 自由時報電子報

2026-05-12
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake AI) to generate realistic fake videos and audio impersonating a real person. This AI-generated content has directly led to harm by deceiving people into trusting false medical advice and purchasing unknown products, which can cause financial loss and health risks. Therefore, it meets the criteria of an AI Incident due to realized harm caused by the AI system's misuse.
Changhua Christian Hospital director targeted by fabricated false advertisements; hospital gathering evidence and urging the public not to be deceived | Local | 中央社 CNA

2026-05-12
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to fabricate videos and audio impersonating a medical professional, which has already caused people, including hospital staff, to be deceived. This constitutes direct harm through misinformation and potential health risks, as well as violations of rights and legal breaches. Therefore, it qualifies as an AI Incident due to realized harm caused by the AI system's malicious use.
Director face-swapped by AI to sell knee braces! Even the hospital's own employees were fooled; hospital statement: it's all fake

2026-05-12
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI deepfake technology to impersonate a real person for fraudulent advertising, which has already caused harm through deception and financial loss. The AI system's misuse directly leads to violations of trust and harm to the community, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Video / Changhua Christian Hospital director impersonated in fake AI endorsement videos; Chen Mu-Kuan issues firm clarification urging the public not to be deceived | yam News

2026-05-12
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to produce fake videos that impersonate a real person and disseminate false medical information, causing harm to individuals who are misled into purchasing products or following incorrect advice. This constitutes a violation of rights and harm to communities through misinformation and fraud. Therefore, it meets the criteria of an AI Incident as the AI system's use has directly led to harm.
Even employees were fooled! Changhua Christian Hospital director Chen Mu-Kuan face-swapped by AI to sell "knee braces"; hospital responds | Society | 三立新聞網 SETN.COM

2026-05-12
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake systems to fabricate videos and audio impersonating a real person for fraudulent commercial purposes. This misuse of AI has directly led to harm by deceiving people, including employees, into buying products under false pretenses, which is a clear harm to individuals and communities. Therefore, it meets the criteria of an AI Incident due to realized harm caused by AI misuse.
AI face-swapping targets renowned doctor! Changhua Christian Hospital director Chen Mu-Kuan impersonated to sell medicine; even hospital employees were deceived - 民視新聞網

2026-05-12
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to fabricate videos and audio of a medical professional, which has directly led to harm by misleading consumers and hospital staff, resulting in financial and potential health harm. The use of AI in this fraudulent activity meets the criteria for an AI Incident because the AI system's misuse has caused realized harm, including deception, financial loss, and legal violations. Therefore, this is classified as an AI Incident.