AI-Generated Deepfakes Used in Celebrity Scam Ads in Taiwan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Taiwanese celebrity Chen Meifeng's likeness and voice were fraudulently replicated using AI to create fake advertisements promoting products and investment scams. The AI-generated deepfakes deceived consumers, particularly targeting the elderly, leading to financial harm and violation of image and voice rights. Chen publicly warned fans and called for stricter AI regulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a scam where AI-generated synthetic media (deepfake videos and voice) of a celebrity is used to defraud fans and the public. This constitutes direct harm to people through financial fraud and deception, fulfilling the criteria of an AI Incident. The AI system's use is central to the harm, as it enables the creation of highly realistic fake content that misleads victims.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Transparency & explainability, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers, Women

Harm types
Economic/Property, Human or fundamental rights, Reputational

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

Chen Meifeng's Face and Voice Misappropriated! Scam Ring Uses AI to Synthesize Fake Ads; She Rushes to Post a Clarification: "Don't Be Fooled"

2025-09-18
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article describes a scam where AI-generated synthetic media (deepfake videos and voice) of a celebrity is used to defraud fans and the public. This constitutes direct harm to people through financial fraud and deception, fulfilling the criteria of an AI Incident. The AI system's use is central to the harm, as it enables the creation of highly realistic fake content that misleads victims.
"Face and Voice Stolen by AI" to Sell Products; Chen Meifeng Speaks Out in Anger

2025-09-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to generate fake facial images and voice recordings of a public figure, which were then used in fraudulent advertising to sell products. This misuse of AI directly leads to harm by enabling scams and misleading consumers, which fits the definition of an AI Incident due to harm to individuals and communities through deception and fraud.
Scam Ads! Likeness Stolen and Voice Faked by AI; Chen Meifeng: "Don't Fall for It"

2025-09-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The use of AI to create a fake voice for fraudulent advertisements directly leads to harm by deceiving consumers, which falls under harm to communities or individuals. The AI system's misuse is central to the incident, as it enables the scam. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated fake voice content used in scams.
Chen Meifeng Erupts in Rare Anger, Exposes "Fake AI Version of Herself" Fraudulently Selling Skincare Products

2025-09-18
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to fake the celebrity's likeness and voice to sell products fraudulently, which is a direct misuse of AI systems causing harm. The harm includes deception of consumers (harm to communities) and violation of the celebrity's rights (image and voice used without consent). This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm through fraud and reputational damage.
Chen Meifeng Is Fed Up! Impersonated by AI to Sell Skincare Products, She Personally Reveals "One Key to Spotting the Scam"

2025-09-18
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate a fake likeness and voice of a public figure to promote and sell counterfeit products, which is a clear case of AI misuse causing harm. The harm includes deception of consumers and potential financial and reputational damage, fitting the definition of an AI Incident under violations of rights and harm to communities. The AI system's use in creating the fake content directly led to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
"AI Chen Meifeng" Spotted Online Selling Skincare Products! The Real Chen Furiously Shares Screenshots: "Even the Voice Is Faked"

2025-09-18
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used maliciously to create fake audiovisual content impersonating a person without consent, which constitutes a violation of personal rights (including image and voice rights) and is used in a scam context to mislead consumers. This misuse has directly led to harm by enabling fraud and violating the individual's rights, fitting the definition of an AI Incident under violations of human rights and breach of applicable law protecting fundamental rights.
Chen Meifeng Angry That "Scammers Can Even Fake Voices"; Urges Extra Vigilance So Elderly Relatives Aren't Deceived

2025-09-18
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that scammers are using AI to generate fake videos and voices of the celebrity, which is a direct misuse of AI systems leading to potential harm through fraud and deception. This constitutes a violation of rights and harm to communities by enabling scams. Since the harm is occurring or has occurred (people being targeted or at risk), this qualifies as an AI Incident.
Even Voices Can Be Faked! Chen Meifeng Denounces AI Scams, Worried the Elderly Will Become Easy Targets

2025-09-18
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake voices and images for fraudulent purposes, which directly leads to harm by enabling scams and deception. The harm includes violation of rights (e.g., misuse of likeness and voice), financial harm to victims, and harm to communities through widespread fraud. Therefore, this qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated content.
Chen Meifeng Fuming! AI-Forged Voice and Video Used to Sell Products; She Blasts the Scam: "Really Steaming Mad"

2025-09-18
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated synthetic videos and voice impersonations used to promote fraudulent products, causing direct harm through scams. The AI system's misuse in creating these fake representations is central to the incident, fulfilling the criteria for an AI Incident due to violations involving deception and harm to consumers. Therefore, this event is classified as an AI Incident.
Chen Meifeng Speaks Out in Rare Fury! Shocked to Find "AI Version of Herself" Selling Skincare Products: "Even the Voice Is Faked"

2025-09-18
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated version of a person being used to sell products fraudulently, including fake voice synthesis. This is a direct misuse of AI technology causing harm to people (scam victims) and the individual impersonated. The harm is realized, not just potential, as the scam is actively occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through fraudulent impersonation and deception.
Chen Meifeng Furious! AI-Faked Voice Claims "Look 10 Years Younger in 7 Days"; She Makes an Urgent Appeal

2025-09-18
Nextapple
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake voice content impersonating the celebrity, which is then used in scam advertisements. This involves an AI system's use leading directly to harm: fraud against consumers and violation of the celebrity's rights. The harm is realized, not just potential, as the scam advertisements are active and the celebrity is warning the public. Hence, it meets the criteria for an AI Incident involving violation of rights and harm to communities through fraud.
"AI Chen Meifeng" Sells Shoddy Skincare Products; The Real Chen Furious, Slams Vendors for Making Dirty Money

2025-09-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate a fake advertisement that misleads consumers into buying poor-quality products, which constitutes harm to consumers and a violation of rights (e.g., unauthorized use of likeness and deceptive marketing). The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the fake ads are actively used to sell products.
Chen Meifeng's Video and Audio Misappropriated by Scam Ring; Appearing Today, She Continues Working to Raise NT$15 Million

2025-09-19
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used by scammers to forge voice and video content, which is then used to deceive consumers. This misuse of AI has directly led to harm through fraud and potential health risks from fake products. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to individuals and communities through deception and potential physical injury.
Chen Meifeng Feels for Chiang Tsu-ping: "She Must Be Very Upset!"; Blasts Scam Ring for Using Her to Make Dirty Money

2025-09-19
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create fake voice and image content (deepfakes) by a scam group, which directly leads to harm by deceiving consumers and causing financial loss. This fits the definition of an AI Incident because the AI system's use has directly led to harm (fraud and deception).
Voice Faked by AI! Chen Meifeng Flares Up Again Tonight: "This Behavior Is Simply Fraud"

2025-09-19
Nextapple
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake voice used in scam advertisements, which is a direct misuse of AI technology causing harm to individuals and consumers. The harm includes fraud and potential health risks, fulfilling the criteria for an AI Incident. The AI system's use in creating fake voices for deceptive purposes directly leads to realized harm, not just a potential risk.
Free Lecture: Mobile Apps Offer Many Conveniences, but Beware of AI Scams

2025-09-21
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article describes a planned free lecture that introduces AI concepts and practical uses of mobile apps, with a focus on educating attendees about AI scams to prevent harm. There is no report of an actual AI incident or harm occurring, nor is there a direct or indirect AI system malfunction or misuse causing harm. The event is primarily educational and preventive, providing complementary information to the public about AI risks and safety measures.
China Cracks Down on Transnational Telecom Fraud; 68,000 Suspects Returned from Abroad

2025-09-20
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article mentions AI as part of the criminals' upgraded methods in telecom fraud, indicating AI system involvement in the use phase. However, it does not detail a specific AI incident causing realized harm but rather describes a general threat and ongoing law enforcement cooperation. This fits the definition of Complementary Information, as it provides context and updates on societal and governance responses to AI-related crime without reporting a distinct AI Incident or AI Hazard event.
Fraud Rampant: Banks Adopt High-Tech to Tackle Mule Accounts

2025-09-20
工商時報
Why's our monitor labelling this an incident or hazard?
The banks are using AI systems (e.g., anomaly detection models such as the '雷神識詐模型', a fraud-detection model) to identify and control fraudulent accounts and transactions. The article focuses on the deployment of these AI systems to prevent harm (fraud) to customers and financial institutions. However, it does not report any realized harm caused by AI malfunction or misuse; rather, it describes the proactive use of AI to reduce harm. Therefore, this is not an AI Incident or AI Hazard but complementary information about AI deployment and governance in banking fraud prevention.
Beigang Township Mayor's Cup Singing Competition Opens; Anti-Fraud Game Outreach Draws Attention

2025-09-21
蕃新聞
Why's our monitor labelling this an incident or hazard?
While the article mentions AI technologies like AI voice and deepfake being used in scams, it does not report any actual incident of harm caused by AI systems. The focus is on raising awareness and prevention through education, which is a societal response to potential AI-related threats. Therefore, this is Complementary Information as it provides context and updates on AI-related crime prevention without describing a new AI Incident or AI Hazard.