Celebrities Targeted by Deepfake Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple investigations reveal the misuse of AI deep-synthesis technology to fabricate celebrity audio and video content, impersonating figures such as Lei Jun, Andy Lau (Liu Dehua), and Dr. Zhang Wenhong for fraudulent marketing and profit. Legal experts stress that unauthorized deepfakes violate intellectual property and personal rights, and call for stricter enforcement.[AI generated]

Why's our monitor labelling this an incident or hazard?

The misuse of AI deepfake systems for impersonating public figures and ordinary individuals has directly led to financial and personal harms (fraud, privacy and portrait rights violations, reputational damage). These are realized harms caused by AI system use, fitting the definition of an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Safety

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers; Other

Harm types
Reputational; Human or fundamental rights; Economic/Property; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Shocking! AI face-swaps for just 9.9 yuan as AI fraud surges 3,000%; police: single-case losses top one million yuan

2025-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The misuse of AI deepfake systems for impersonating public figures and ordinary individuals has directly led to financial and personal harms (fraud, privacy and portrait rights violations, reputational damage). These are realized harms caused by AI system use, fitting the definition of an AI Incident.

AI face-swap software is easy to find online; some vendors claim "100% realism"

2025-03-14
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This describes actual misuse of generative AI (face-swap and voice-clone software) to produce illicit, non-consensual explicit content. The AI’s development and use have directly led to violations of personal rights and reputational harm, qualifying it as an AI Incident.

AI face-swap software offers unlimited use for 30 yuan a month; abuse risks spark heated debate

2025-03-14
news.china.com
Why's our monitor labelling this an incident or hazard?
An AI system (face-swap/deepfake software) was directly used to create illicit content without consent, causing harm to the individual’s rights and reputation. This constitutes a realized harm under AI Incident criteria (violation of fundamental and personal rights).

A custom AI face costs just 1,980 yuan online

2025-03-14
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
No specific misuse or harm has been reported yet, but the service clearly enables creation of undetectable deepfakes that could be used for fraud, impersonation, defamation or other malicious ends. This represents a plausible risk of significant harm if deployed, making it an AI Hazard.

"2亿诈骗案"一年后,AI换脸犯罪更疯狂了

2025-03-13
tech.ifeng.com
Why's our monitor labelling this an incident or hazard?
The deepfake services described (custom AI face‐swap and voice synthesis) were actively used to perpetrate scams—most notably the 200 million HKD fraud via a fake video conference and subsequent cases where employees transferred hundreds of thousands of yuan. These are direct harms caused by the misuse of AI systems, fitting the definition of AI Incidents.

3·15 | A video call from Andy Lau? It's a 1,980-yuan AI face-swap scam

2025-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The piece describes multiple instances where AI‐generated fake video calls and livestreams have directly led to fraud (elderly victims defrauded), infringement of celebrities’ likenesses, and harm to consumers. This constitutes realized harm caused by AI misuse, fitting the definition of an AI Incident.

"2亿诈骗案"一年后,AI换脸犯罪更疯狂了

2025-03-13
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The piece documents actual harms—large-scale financial fraud—directly enabled by AI face-swapping (DeepFake) technology. These events represent realized AI-driven incidents causing property and economic harm.

整治AI"换脸拟声":新技术引发的新问题,需要新思维解决

2025-03-14
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
Although the article recounts actual incidents of deepfake scams and impersonations, its primary purpose is to describe and analyze governance proposals—legislative and technical responses—to the broader problem of AI-generated face/voice forgeries. This constitutes a societal and policy response rather than reporting a new AI incident or describing a potential hazard. Therefore, it falls under Complementary Information.

AI face-swapping invades livestream e-commerce: seeing is no longer believing

2025-03-15
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes real‐world misuse of AI (deepfake face replacement) to commit fraud in e‐commerce. Consumers suffer property/financial losses due to the malicious AI application. This is a realized harm directly linked to an AI system’s misuse, so it qualifies as an AI Incident.

治理AI"换脸拟声"需要新技术新思维

2025-03-15
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Although the article describes real harms from AI deepfakes (celebrity impersonation scams), its primary focus is on governance responses—legislative calls, new‐thinking approaches, and regulatory innovations—rather than reporting a new incident or hazard. Thus it provides contextual, policy‐oriented Complementary Information about ongoing efforts to address those harms.

A video call from Andy Lau? It's an AI face-swap scam

2025-03-16
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This is a clear case of an AI system’s use (deepfake face-swap and voice-clone technology) directly causing harm (fraud and financial loss). The harms have already materialized, making this an AI Incident.

Does selling AI-synthesized celebrity audio and video constitute infringement?

2025-02-16
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This piece provides legal interpretation and warning about AI deepfake misuse rather than reporting a specific newly occurring incident or outlining a future risk scenario. It serves to update readers on laws and enforcement perspectives, making it Complementary Information.

Lei Jun, Andy Lau, and Zhang Wenhong among the victims: exposing the chaos of AI-synthesized celebrity audio and video

2025-02-16
news.ifeng.com
Why's our monitor labelling this an incident or hazard?
The misuse of AI deepfake tools is directly causing harm by enabling scams, infringing celebrity image and intellectual property rights, and deceiving consumers. These are realized harms stemming from the development and use of AI systems for malicious purposes, fitting the definition of an AI Incident.

Lei Jun and Andy Lau among the victims: exposing the chaos of AI-synthesized celebrity audio and video

2025-02-16
news.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This is a case of actual, unauthorized use of AI systems (deepfake audio/video generators) directly resulting in harm: violations of individuals’ personality and voice rights (civil and intellectual property rights), and facilitating scams. Since the AI’s misuse has led to realized legal and reputational harms, it constitutes an AI Incident.

Lei Jun and Andy Lau among the victims: CCTV exposes the chaos of AI-synthesized celebrity audio and video

2025-02-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Rather than reporting a single new incident or forecasting a specific future threat, the piece summarizes multiple infringement cases, ongoing governmental regulations, recent court rulings and expert recommendations for detecting and preventing AI deep-fake abuses. Its primary focus is on legal and policy responses and broader context, making it Complementary Information.

"一键换声"绝不可任性

2025-02-14
Guangzhou Morning Post
Why's our monitor labelling this an incident or hazard?
AI voice‐cloning systems are explicitly described and have already been used maliciously in multiple scam cases (e.g., impersonating a grandson to defraud an elderly person of tens of thousands of yuan). This constitutes an AI system’s misuse causing direct harm to individuals, meeting the criteria for an AI Incident.

Lei Jun and Andy Lau are both victims! Latest investigation exposes abuse of AI tools

2025-02-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article describes concrete instances where AI deepfake tools have been used to impersonate public figures (Lei Jun, Andy Lau) and ordinary people for fraudulent schemes and unauthorized promotion. These actions constitute infringements of personal rights and have been used to scam consumers, representing direct harm caused by the use of AI systems. Thus, this is an AI Incident.

Lei Jun and Andy Lau among the victims: CCTV exposes the chaos of AI-synthesized celebrity audio and video

2025-02-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This describes actual harms—unauthorized AI-generated celebrity voices and face-swap videos used for fraud, infringement of likeness and voice rights, and resulting legal violations. The AI systems’ use directly led to these violations, fitting the definition of an AI Incident (violations of human and intellectual property rights).

AI voice abuse urgently demands vigilance and governance

2025-02-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
While the text cites real instances of voice-cloning abuse and a recent court decision, its primary focus is on summarizing the phenomenon and advocating for policy, regulatory, and technical responses rather than reporting a single, discrete harm event. This aligns with ‘Complementary Information,’ since it offers contextual background, governance discussion, and legal developments regarding AI voice cloning.

From spoofs of entrepreneur Lei Jun to AI-made videos of Dr. Zhang Wenhong hawking products, AI deep-synthesis audio and video infringement has grown steadily worse over the past year. Earlier, a netizen used an AI clone of Andy Lau's voice to attract traffic, and Lau's film company issued an urgent statement warning netizens not to fall for AI voice-synthesis scams impersonating him.

A recent CCTV News investigation found that producing AI deep-synthesis audio and video is not difficult. On some shopping platforms the technology has become a money-making tool for online shops: for just a few dozen yuan, customers can order custom deep-synthesized celebrity audio and video.

How are these synthesized voices and swapped faces made? CCTV reporters found that the relevant deep-synthesis software is easy to download online, and some livestreamers even teach viewers how to use it. Searching a mobile app store for face- and voice-swapping turns up many tools, but the reporters found these limited to fixed scenarios and low realism. So how do online shops achieve the highly realistic results they sell?

Xue Zhihui, an expert on the AI Security Governance Committee of the Cybersecurity Association of China, said many open-source tools capable of voice- and face-swapping are freely available for download. Both kinds require tuning parameters for each scenario and involve many steps; they are semi-professional tools with a real technical threshold, so ordinary netizens rarely know them. Even so, reporters found numerous hosts on short-video platforms teaching people how to use such software.

A 21st Century Business Herald reporter previously tested a customizable text-to-voice model: 198 yuan buys one custom voice character, and just over 1,000 yuan buys six. The product page already offered the voices of Yang Mi, Liu Yifei, Ding Zhen, Jackie Chan, and others, which could be added to a character library like items in a shopping cart. For "instant cloning," users need only upload a 5-8 second voice sample; higher-fidelity "professional cloning" requires 1 to 60 minutes of training material. [Details: Voices of Liu Yifei, Yang Mi, Jackie Chan, and other stars "cloned in an instant"? Industry insiders are alarmed!]

Relevant laws and regulations clearly prohibit deep-synthesizing and publishing another person's information without authorization. Online shops that take orders to deep-synthesize celebrity videos are already committing infringement and should bear the corresponding legal responsibility.

What legal liability do such shops face? Zhao Jingwu, associate professor at Beihang University Law School, noted that Chinese courts have already concluded the country's first personality-rights case over an AI-generated voice. The plaintiff, a voice actor, had their voice AI-cloned and sold without authorization; the court held that the Civil Code's personality-rights provisions treat a natural person's voice as a personality interest with personal exclusivity, and found the defendant's conduct infringing. Not only celebrities but ordinary people, and even animated characters, are protected: using AI to synthesize their audio or video without the person's or rights-holder's consent can also be judged infringement carrying legal liability.

Last October, Hong Kong police said they had dismantled a fraud operation whose members used deepfake technology to pose as young women and lure victims into virtual-currency investments, with more than HK$360 million involved. The UN Office on Drugs and Crime (UNODC) issued a stark warning in a 142-page report: scammers in Southeast Asia are using generative AI and deepfake technology to scale up their operations and make them more effective. The report offers the clearest evidence to date, citing, for example, a 1,530% year-on-year rise in deepfake incidents in the Asia-Pacific region last year, and monitoring data from the past six months showing a more than 600% increase in deepfake products marketed to scam gangs on dark-web Telegram channels. In domestic online discussions of "pig-butchering" scams, many people report encountering scammers who now proactively request video calls; where scammers once could only play pre-recorded clips and could not hold a conversation, lip movements and voice now match in real time. Jiang Hanxiang, chief scientist at Guotou Intelligence and director of Fujian's key laboratory for electronic-data forensics, told 21st Century Business Herald that applying AI face-swapping to pig-butchering scams is indeed relatively easy, and producing fakes that are hard to tell from reality poses no problem.

As the technology spreads, how can we guard against AI deep-synthesis infringement?

Xue Zhihui pointed out that, on the technical side, AI can be used to counter and detect AI: existing detection tools can analyze whether an image, audio clip, or video has been edited or synthesized.

Southern Finance Omnimedia interviewed Zhang Bo, a member of the same committee and assistant president of Topsec Technologies Group, who offered "three moves" for seeing through the tricks. First, raise your own risk awareness: AI face-swapping often appears alongside transaction fraud, so stay highly vigilant in any scenario involving personal information or money transfers. Second, on receiving a video call, closely observe whether the lighting, background, and facial contours look natural; if necessary, ask the other party to quickly raise, lower, and turn their head, and watch for anomalies in the picture. Third, before any transfer or remittance, confirm the other party's identity through a separate, more reliable channel, such as a direct phone call, to make sure the transaction details are genuine.

Lyu Yanhui, a member of the Computer Security Committee of the China Computer Federation, said preventing AI infringement requires coordinated measures across law, platforms, and the public. On the legal side, legislation should be refined with specific provisions on AI cloning that clearly define infringement and liability, alongside stronger enforcement. Platforms should publicize and enforce the relevant rules, protect the data they hold, apply technical safeguards against AI-cloning abuse, and build sound content-review and infringement-reporting mechanisms to catch and handle violations promptly.

Legal experts also warned producers and publishers of deep-synthesized content not to count on luck: the law has no gray areas, and a small gain is not worth the loss. Zhao Jingwu stressed that the law does not ban AI synthesis technology itself; it bans illegal and unreasonable uses, above all the publication and spread of AI-synthesized information without prominent labels and disclosures.

Source | 21 Jingji client, CCTV News, 21st Century Business Herald (Xiao Xiao, Chen Mengxuan)
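The "three moves" quoted above amount to a simple decision rule: treat a video call as untrusted until every check passes, and never approve a transfer on the strength of the video alone. A minimal illustrative sketch of that rule (the check names and the `VideoCallChecks`/`safe_to_transfer` helpers are assumptions for illustration, not part of any tool cited in the article):

```python
from dataclasses import dataclass


@dataclass
class VideoCallChecks:
    """Results of the manual checks recommended for suspicious video calls."""
    lighting_and_contours_natural: bool   # lighting, background, face edges look consistent
    head_turn_test_passed: bool           # no glitches when the caller turns their head
    identity_confirmed_out_of_band: bool  # e.g. verified via a separate phone call


def safe_to_transfer(checks: VideoCallChecks) -> bool:
    """Approve a money transfer only when every check passes.

    Any single failed check is treated as a deepfake red flag,
    mirroring the advice quoted in the article.
    """
    return (checks.lighting_and_contours_natural
            and checks.head_turn_test_passed
            and checks.identity_confirmed_out_of_band)


# Example: a call that looks natural but was never confirmed by phone
suspect_call = VideoCallChecks(True, True, False)
print(safe_to_transfer(suspect_call))  # False: do not transfer
```

The conjunction of all three checks reflects the article's point that realism alone is no longer evidence of authenticity; the out-of-band confirmation is the one check a deepfake cannot pass.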

2025-02-17
finance.stockstar.com
Why's our monitor labelling this an incident or hazard?
This describes multiple incidents where AI systems for deepfake audio/video were developed and used without authorization, directly causing rights violations and fraud losses. These are concrete harms from misuse of AI, so it is an AI Incident.

Exposed! Lei Jun and Andy Lau among the victims

2025-02-17
21jingji.com
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because it describes actual harms: unauthorized AI-generated deepfakes violating individuals' legal rights and enabling fraud schemes that defrauded victims of hundreds of millions. An AI system's misuse directly led to these infringements and economic harms.

21 Video | Lei Jun and Andy Lau among the victims as AI-synthesized celebrity audio/video chaos is exposed! Don't be fooled by AI!

2025-02-17
21jingji.com
Why's our monitor labelling this an incident or hazard?
Deep‐synthesis AI systems are actively being used to produce and distribute infringing audio/video of public figures without consent, leading to rights violations (intellectual property and publicity rights) and consumer deception. This constitutes direct harm from the use of AI systems.

From spoofs of entrepreneur Lei Jun to AI-made videos of Dr. Zhang Wenhong hawking products, AI deep-synthesis audio and video infringement has grown steadily worse over the past year. Earlier, a netizen used an AI clone of Andy Lau's voice to attract traffic, and Lau's film company issued an urgent statement warning netizens not to fall for AI voice-synthesis scams impersonating him. A recent CCTV News investigation found that producing such content is not difficult: on some shopping platforms, AI deep synthesis has become a money-making tool for online shops, with custom celebrity deepfakes costing only a few dozen yuan. A 21st Century Business Herald reporter previously tested a customizable text-to-voice model: 198 yuan buys one custom voice character, and just over 1,000 yuan buys six. The product page already offered the voices of Yang Mi, Liu Yifei, Ding Zhen, Jackie Chan, and others, which could be added to a character library like items in a shopping cart. Relevant laws and regulations clearly prohibit deep-synthesizing and publishing another person's information without authorization; online shops that take such orders are already committing infringement and should bear the corresponding legal responsibility.

2025-02-17
finance.stockstar.com
Why's our monitor labelling this an incident or hazard?
The described AI system (deep‐fake audio/video generators) has been used to create unauthorized celebrity content, directly violating legal rights and causing reputational and economic harm. These harms have materialized—consumers are deceived, and celebrities’ rights are infringed—meeting the criteria for an AI Incident (specifically a violation of human and intellectual property rights).

Lei Jun and Andy Lau among the victims!

2025-02-17
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The report details actual instances where AI deep synthesis software is used to produce and distribute celebrity voice and face clones without consent, infringing on personal and intellectual property rights. This misuse of AI has directly led to legal cases and recognized harm, fitting the definition of an AI Incident (violation of human rights/intellectual property).

AI voice abuse urgently awaits legal regulation

2025-02-17
society.people.com.cn
Why's our monitor labelling this an incident or hazard?
While the article references real harms caused by AI-generated voice impersonation (scams, misinformation, consumer deception), its primary focus is on advocating legal regulation and platform governance measures to address and prevent these abuses. This aligns with a societal/governance response to AI issues rather than reporting a single new incident or warning of a future hazard.

Exposed! Lei Jun and Andy Lau among the victims as AI deep-synthesis audio/video infringement intensifies

2025-02-17
finance.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The piece details multiple realized harms directly resulting from AI systems (face-swap/voice-clone tools): infringement of individuals' personality and publicity rights; a busted Hong Kong fraud ring that used AI deepfakes in schemes involving over HK$360 million; a Chinese court ruling on an AI voice-infringement case; and UNODC data showing a surge in AI-enabled scam tools. These are actual incidents of harm caused by AI misuse, not speculative risks or mere background context.

Exposing the chaos of AI-synthesized celebrity audio and video: Lei Jun and Andy Lau among the victims

2025-02-17
Guangzhou Morning Post
Why's our monitor labelling this an incident or hazard?
The report documents realized harms from AI systems: unauthorized deep synthesis of celebrity audio-video used for scams and profit, constituting violations of rights under civil and regulatory frameworks. These are direct incidents where the development and use of AI deepfake tools have already led to infringement and fraud. Thus, it qualifies as an AI Incident.

CCTV spotlights abuse of AI deep-synthesis technology; Topsec warns against AI cloning risks

2025-02-17
stock.stockstar.com
Why's our monitor labelling this an incident or hazard?
While the article references real harms from deepfake scams and privacy violations, its primary focus is on raising awareness, highlighting existing regulations and detection techniques, and advising on multi-level prevention strategies. It serves to contextualize and update stakeholders on policy and governance responses rather than documenting a single new incident or emerging hazard.

CCTV spotlights abuse of AI deep-synthesis technology; Topsec warns against AI cloning risks - Stockstar

2025-02-17
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article describes actual harms from AI deep synthesis technology: criminal use to impersonate individuals (e.g., entrepreneurs, doctors), perpetrate fraud, and violate portrait/voice rights. These constitute realized harms directly resulting from AI misuse, fitting the AI Incident definition.

Lei Jun and Andy Lau among the victims: CCTV exposes the chaos of AI-synthesized celebrity audio and video, custom-made for just tens of yuan

2025-02-16
Kuai Keji
Why's our monitor labelling this an incident or hazard?
This is an actual, ongoing series of harms caused by AI systems (deep synthesis of celebrity voices and likenesses) that infringe on rights and facilitate scams. The unauthorized AI-generated content directly leads to legal violations and consumer harm, fitting the definition of an AI Incident.

CMG exposes the chaos of AI-synthesized celebrity audio and video as infringement cases proliferate

2025-02-16
news.china.com
Why's our monitor labelling this an incident or hazard?
The piece describes multiple realized harms directly caused by the use of AI systems for deep synthesis of voices and faces—celebrity impersonations, fraud schemes, and legal infringement cases—meeting the definition of AI Incident (violations of rights, intellectual property infringements, and fraud).

Protecting voice rights concerns both individual interests and the cultivation of a healthy online order. Only by further strengthening legal oversight, enforcing the responsibilities of key actors, and raising awareness of protection can we enjoy the conveniences technology brings in the intelligent era while curbing its potential abuse.

2025-02-14
opinion.southcn.com
Why's our monitor labelling this an incident or hazard?
The piece does not detail a singular incident or a specific near-miss but rather surveys recurring harms from AI-synthesized voices and focuses on policy responses and regulatory needs. As it primarily provides governance context and recommendations, it qualifies as Complementary Information.

Taking multiple measures to curb AI voice abuse

2025-02-14
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The piece focuses on regulatory and technical responses to AI voice synthesis misuse and broader governance proposals, rather than detailing a particular incident or new threat. It therefore constitutes Complementary Information, offering contextual updates on policy and oversight developments in the AI ecosystem.

Andy Lau and Lei Jun among the victims: unmasking the chaos of AI celebrity synthesis

2025-02-17
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
This is a case of actual harm: AI voice-cloning systems are being used without consent to impersonate public figures, spreading false and offensive content that infringes on personal rights and misleads audiences. The misuse of these AI systems has directly caused legal, social and informational harm, making it an AI Incident.

AI deep synthesis raises the bar for cybersecurity; listed companies take multiple measures to fortify their "firewalls"

2025-02-17
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
No concrete harm or specific misuse event is described; instead, the article centers on the potential risks (deepfakes aiding crime, misleading information) and industry/government responses to shore up defenses. This forward‐looking discussion of plausible threats classifies it as an AI Hazard.

CCTV exposes the chaos of AI-synthesized celebrity audio and video, custom-made for tens of yuan - cnBeta.COM mobile edition

2025-02-16
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article describes AI deep-synthesis systems being used to produce and publicly sell unauthorized celebrity audio and video, directly infringing the rights of celebrities and others. This is malicious or non-compliant use of an AI system that has already caused infringement harm, meeting the definition of an AI Incident.

3·15 Opinion | AI marketing deceiving consumers? This ill wind must be stopped!

2025-02-14
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
AI systems are being used to fabricate and manipulate video content, impersonating real individuals to promote worthless or potentially harmful products. Consumers have been misled into purchasing ineffective health supplements and delaying proper treatment. This misuse directly results in economic and health harms and infringes on public figures’ personality rights, fitting the definition of an AI Incident.

Curbing AI voice abuse so that technology serves good

2025-02-13
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI voice-synthesis systems capable of cloning voices rapidly and highlights their misuse, implying potential harms such as fraud, misinformation, and privacy violations, although no specific harm event is reported. Because it focuses on the potential for misuse and the need for prevention rather than on a realized harm, it describes a plausible risk scenario and is best classified as an AI Hazard.