Chinese Actor Wang Jinsong's Likeness Deepfaked by AI, Raising Legal and Fraud Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese actor Wang Jinsong's image and voice were used without consent in a highly realistic AI-generated video, causing confusion even among his family. The incident highlights growing concerns over AI-enabled impersonation, intellectual property violations, and the potential for fraud. Authorities have taken action against similar cases involving other celebrities in China.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a realistic video impersonating the actor, constituting unauthorized use of his likeness and voice, which is a violation of personal rights and intellectual property. The harm has materialized as the actor's image was misused, and there is a plausible risk of more severe harms like AI-enabled scams. Since the misuse has already occurred and caused harm, this qualifies as an AI Incident.[AI generated]
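The monitor's rationale amounts to a simple triage rule: an event involving an AI system counts as an incident once harm has materialized, and as a hazard while harm is only plausible. A minimal sketch of that rule (the `Event` fields and `classify` function are illustrative assumptions, not the monitor's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Event:
    ai_system_involved: bool     # was an AI system used?
    harm_realized: bool          # has harm already occurred?
    plausible_future_harm: bool  # is further harm credible?

def classify(event: Event) -> str:
    """Triage rule sketched from the monitor's rationale:
    realized harm from an AI system -> incident; plausible
    but unrealized harm -> hazard; otherwise complementary."""
    if not event.ai_system_involved:
        return "Complementary information"
    if event.harm_realized:
        return "AI incident"
    if event.plausible_future_harm:
        return "AI hazard"
    return "Complementary information"

# The Wang Jinsong case: AI involved, likeness misuse already occurred.
print(classify(Event(True, True, True)))  # prints: AI incident
```

Under this rule, the same event would have been labelled only a hazard had the deepfake never circulated; its publication is what tips the classification.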
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


Actor Wang Jinsong posts angry denunciation: "It's terrifying!"

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic video impersonating the actor, constituting unauthorized use of his likeness and voice, which is a violation of personal rights and intellectual property. The harm has materialized as the actor's image was misused, and there is a plausible risk of more severe harms like AI-enabled scams. Since the misuse has already occurred and caused harm, this qualifies as an AI Incident.

Actor Wang Jinsong denounces: "It's terrifying!"

2026-02-27
cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used maliciously to create fake videos and audio impersonations of public figures, which has directly caused violations of personal rights and misleading commercial practices. These harms fall under violations of human rights and legal protections, specifically portrait and reputation rights, as well as consumer protection laws. The direct misuse of AI-generated content to deceive and profit constitutes an AI Incident. The involvement of regulatory actions and public complaints further confirms the realized harm and legal breaches.

Comments (1)

2026-02-26
guancha.cn
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the videos are generated using AI to replicate the actor's image, voice, and lip movements. The actor reports that these AI-generated videos have been widely circulated, causing confusion and distress to him and his family, indicating realized harm related to identity rights and potential reputational damage. Although the videos have been deleted, the actor expresses concern about more sophisticated and malicious uses in the future, such as scams. This constitutes an AI Incident because the AI-generated content has already caused harm through unauthorized use of personal likeness and the potential for fraud, which is a violation of rights and harm to the individual and community.

Reporter tests AI "magic edits" of celebrities: hundreds of yuan quoted per minute, blind spots in platform moderation, multiple stars caught in the same scam

2026-02-27
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate fake videos of celebrities without authorization, which directly leads to harm including violation of personal rights and facilitation of fraudulent schemes. The AI-generated videos cause reputational damage to the celebrities and financial harm to the public through scams. The platforms' inability to fully detect and prevent such content exacerbates the harm. The involvement of AI in the creation and dissemination of these videos is central to the incident, fulfilling the criteria for an AI Incident as the AI system's use has directly led to significant harm to individuals and communities.

Actor Wang Jinsong posts: "It's terrifying!" (2026-02-26 21:38)

2026-02-26
每日经济新闻
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video impersonating the actor, which constitutes unauthorized use of his likeness and voice. This is a violation of personal rights and potentially intellectual property rights. The harm has occurred as the actor and his family were unable to distinguish the fake video from reality, causing reputational and personal harm. Therefore, this event qualifies as an AI Incident due to the realized violation of rights and harm caused by the AI-generated content.

Actor Wang Jinsong encounters an AI video of himself and calls it terrifying: "You absolutely cannot tell real from fake"

2026-02-27
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video that infringed on the actor's portrait rights, constituting a violation of intellectual property and personal rights. The harm has already occurred as the actor's image was misused without consent, which fits the definition of an AI Incident involving violations of human rights or intellectual property rights. The actor's concerns about future misuse and legal preparedness are relevant but secondary to the realized harm from the AI-generated content.

Actor Wang Jinsong denounces AI theft of his likeness, worries about fraud risk

2026-02-27
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic fake video of the actor without consent, constituting a violation of his rights (image and possibly intellectual property rights). The harm has occurred in the form of unauthorized use and potential reputational damage, and there is a credible risk of future harms such as fraud. Since the video was posted and then deleted, the incident of unauthorized AI-generated content has already taken place, making this an AI Incident due to realized harm (violation of rights) and potential for further harm.

Actor Wang Jinsong angrily denounces AI misappropriation: "It's terrifying!"

2026-02-27
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate videos impersonating the actor's likeness and voice without consent, which is a direct misuse of AI technology causing harm to the individual's rights (violation of intellectual property and personal rights). The harm is realized as the videos were publicly distributed and caused confusion and concern. The actor's complaint and the deletion of videos confirm the incident's occurrence. The potential for future harm such as fraud is also noted, but the realized harm already classifies this as an AI Incident rather than just a hazard or complementary information.

Sina AI Hot Topics Hourly Report | 2026-02-26 22:00: Today's real-time AI news roundup

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The actor Wang Jinsong's AI-generated video impersonation involves an AI system creating realistic video and audio of a person without consent, directly leading to harm through identity theft and potential rights violations. The article explicitly states the harm has occurred and the actor's concern about future criminal misuse. Similarly, the AI-generated stranger WeChat friend requests represent misuse of AI to obtain personal contact information, which can lead to privacy harm or scams. These harms fall under violations of rights and harm to individuals. Other parts of the article describe AI product launches, financial results, and industry trends without direct harm, which are complementary information. Given the presence of realized harm caused by AI misuse, the event is best classified as an AI Incident.

Sina AI Hot Topics Hourly Report | 2026-02-28 10:00: Today's real-time AI news roundup

2026-02-28
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The unauthorized AI face-swapping of actor Wang Jinsong's image for advertising without consent directly harms his personal rights, fitting the definition of an AI Incident (violation of rights). The U.S. government's ban on Anthropic and Pentagon's acceptance of OpenAI's safety rules are policy and governance responses, not new incidents or hazards, thus Complementary Information. The humanoid robot production and cyber monk development are AI system deployments with no reported harm, so they are Complementary Information. Other news items do not describe AI incidents or hazards. Hence, the main harm event is the AI face-swap misuse, classified as AI Incident, while the rest are Complementary Information.

Wang Jinsong says AI stole his likeness to generate videos indistinguishable from reality; even his family cannot tell them apart

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate fake videos impersonating a real person, which is a clear example of an AI system's use leading to harm. The harm includes violation of personal rights (image and voice theft), potential psychological harm to the individual and their family, and the risk of future fraud or scams. Since the AI-generated videos have already been created and circulated, causing confusion and concern, this constitutes an AI Incident rather than a mere hazard or complementary information.

Reporter tests AI "magic edits" of celebrities: hundreds of yuan quoted per minute, multiple stars caught in the same scam

2026-02-28
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for video manipulation (AI 'magic modification' or 'AI dubbing') that directly led to harm by infringing on celebrities' rights and enabling fraudulent schemes that harm the public's financial security. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident. The harm includes violations of intellectual property and personal rights, as well as harm to communities through fraud and illegal fundraising. The article also discusses platform responses and legal considerations, but the primary focus is on the realized harm caused by AI-generated fake videos used in scams.

Wang Jinsong denounces AI misappropriation of his likeness; portrait-rights protection faces new challenges

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating synthetic videos that directly infringe on Wang Jinsong's portrait rights and personal image, constituting a violation of intellectual property and personal rights under applicable law. The harm has already occurred as the videos were created and disseminated, causing distress and rights violations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights). The discussion of future risks and legal challenges further contextualizes the incident but does not change the classification.

Wang Jinsong AI face-swap incident sparks heated debate: celebrity portrait-rights protection faces new challenges

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate realistic deepfake videos that infringe on the actor's portrait rights, constituting a violation of personal and intellectual property rights. The harm is realized as the actor and his family are unable to distinguish real from AI-generated content, and the videos were removed only after complaints, indicating actual infringement. The article focuses on the incident of AI misuse causing harm and the legal context, making it an AI Incident rather than a mere hazard or complementary information.

Actor Wang Jinsong posts angry denunciation: "It's terrifying!"

2026-02-26
新浪军事频道
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic videos impersonating the actor without consent, constituting a violation of his rights (specifically, personal and intellectual property rights). The harm has already occurred as the actor's image and voice were stolen and used in videos, causing reputational and personal distress. Therefore, this qualifies as an AI Incident due to the realized harm from AI misuse involving identity theft and potential for further criminal use.

Actor Wang Jinsong denounces: "It's terrifying"

2026-02-27
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to create fake videos and cloned voices of public figures, which have been used for unauthorized commercial purposes and false advertising. These actions constitute violations of portrait and reputation rights under law, causing direct harm to the individuals impersonated and misleading consumers. The involvement of AI in generating these fake contents and the resulting legal penalties confirm that this is an AI Incident involving realized harm due to AI misuse.

His likeness stolen by AI to generate videos, actor Wang Jinsong: "It's terrifying"

2026-02-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the videos are generated using AI to replicate the actor's image, voice, and lip movements. The misuse of AI to create these deepfake videos has directly led to harm in the form of violation of the actor's rights (image and voice rights) and potential harm to his reputation and personal security. The actor's concern about future, more malicious uses such as scams further underscores the seriousness of the harm. Therefore, this event qualifies as an AI Incident due to realized harm from AI misuse.

Actor Wang Jinsong posts angry denunciation: "It's terrifying!"

2026-02-26
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video impersonating the actor, which constitutes unauthorized use of his likeness and voice. This is a violation of personal rights and intellectual property, and the actor expresses concern about potential future harms such as fraud and criminal activities enabled by such AI-generated content. Since the AI-generated content has already been created and distributed (even if later deleted), and the harm of rights violation and potential for fraud is realized or imminent, this qualifies as an AI Incident under the definitions provided.

Trouble falls from the sky while he sits at home: 58-year-old actor Wang Jinsong ultimately follows in Jin Dong's footsteps

2026-02-27
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake videos and audio impersonations (deepfakes) that have directly led to harm, including violations of individuals' rights (privacy and portrait rights), financial scams causing harm to victims, and reputational damage to public figures. These harms fall under violations of human rights and harm to communities. Since the harm is occurring and has occurred, this qualifies as an AI Incident rather than a hazard or complementary information.

Trouble falls from the sky while he sits at home: 58-year-old actor Wang Jinsong follows in Jin Dong's footsteps

2026-02-27
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to generate fake videos and audio of Wang Jinsong, which have been disseminated without authorization, causing harm to his personal rights and posing risks of financial fraud. The AI system's role is pivotal in creating indistinguishable fake content, leading to direct harm (violation of rights and potential financial harm). The incident also references similar harms experienced by another actor, Jin Dong, reinforcing the pattern of AI misuse causing real harm. The platform's response (content removal) is insufficient to prevent ongoing harm, underscoring the seriousness of the incident. This fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and potential harm to property (financial fraud).

The videos Wang Jinsong reported also featured Jet Li, Yao Ming, Li Yapeng, Yu Hewei, and Tang Guoqiang

2026-02-27
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos that impersonate celebrities without authorization, which constitutes a violation of their portrait and related rights, a breach of applicable laws protecting personal rights. The videos have been used commercially to promote a financial product falsely linked to a reputable enterprise, misleading the public and causing reputational and potential financial harm. The involvement of AI in creating realistic synthetic media is central to the harm. The incident has already caused harm (violation of rights and misleading the public), meeting the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses legal implications and platform responsibilities, reinforcing the classification as an incident.

Wang Jinsong speaks out: AI forged his likeness and voice without authorization; he calls for stronger deepfake governance

2026-02-27
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating deepfake videos using the actor's likeness and voice without authorization, constituting a violation of personal rights (a breach of applicable law protecting intellectual property and personality rights). The harm has already occurred (unauthorized use and potential reputational damage), and there is a credible risk of future harms such as fraud. Therefore, this qualifies as an AI Incident due to realized harm and ongoing risks related to AI misuse.

Wang Jinsong worries AI face-swaps are too realistic to detect; calls for stronger portrait-rights protection and platform oversight

2026-02-27
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating realistic fake videos (deepfakes) of a person without authorization, which is a direct use of AI technology. The harm includes violation of portrait rights (a legal and fundamental right), potential for fraud and identity misuse, and reputational damage. The AI-generated content has been disseminated widely, causing actual harm, not just potential harm. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to violations of rights and harm to the individual and community.

Sharp commentary | The infringing AI videos of the veteran actor were deleted; then what?

2026-02-27
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate synthetic videos and audio that infringe on personal rights and cause harm such as identity theft, fraud, and reputational damage. These harms are occurring and have been documented, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article describes realized harms from AI misuse rather than just potential risks or general commentary, so it qualifies as an AI Incident rather than a hazard or complementary information.

Wang Jinsong's rights infringed by an AI-forged video; he calls for stronger legal protection of personality rights

2026-02-27
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake video that infringes on the actor's personality rights, which is a violation of fundamental rights under applicable law. The harm has already occurred as the unauthorized video was publicly disseminated, causing reputational and personal rights harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article also discusses broader implications and calls for legal and ethical responses, but the primary focus is on the realized harm from the AI-generated content.

Actor Wang Jinsong angrily denounces AI misappropriation: "It's terrifying!"

2026-02-26
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a video that impersonates the actor without authorization, which is a direct misuse of AI technology causing harm to the individual's rights (violation of personal and intellectual property rights). The actor's concern about future criminal uses such as fraud further underscores the seriousness of the harm. The deletion of the video after complaint does not negate the fact that harm occurred. Hence, this is an AI Incident as per the definitions provided.

"It's terrifying!" Famous Wuxi-born actor posts angry denunciation

2026-02-26
m.163.com
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the event centers on AI-generated synthetic video content impersonating a real person. The use of this AI system has directly led to harm in the form of violation of the actor's portrait rights and potential reputation damage, which are recognized legal rights. The video was removed after complaints, indicating the harm was realized. The event also highlights concerns about future misuse for scams, but the primary focus is on the actual incident of AI-generated impersonation causing rights violations. Therefore, this qualifies as an AI Incident due to realized harm involving AI misuse infringing on personal rights.

Wang Jinsong speaks out: "It's terrifying"

2026-02-26
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake videos that impersonate a real person without consent, constituting a violation of portrait rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized as the actor and his family cannot distinguish real from fake videos, and the content was removed only after complaints, indicating direct harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated deepfake content infringing on personal rights.

Actor Wang Jinsong says AI stole his likeness to generate videos: "It's terrifying; the voice and lip-sync are completely indistinguishable from the real thing." He has filed a complaint

2026-02-26
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create synthetic videos and audio impersonating real individuals without consent, which is a misuse of AI technology. The harm includes violation of personal rights and potential reputational damage, which falls under violations of human rights and intellectual property rights. Since the harm is realized (videos were created and circulated), this qualifies as an AI Incident rather than a hazard or complementary information.