Liu Xiaoqing Deepfake Impersonation Incident

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese celebrity Liu Xiaoqing was shocked to discover AI-generated videos imitating her face and voice, posted by an account linked to Guangzhou Ciyi Biotechnology. Fans quickly reported the fraudulent content, and the account was suspended for rule violations, highlighting ongoing concerns over deepfake misuse and rights infringement.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating synthetic videos and audio impersonating a real person, which is a clear use of AI technology. The harm includes violation of the individual's portrait rights and the spread of misleading content that deceives viewers, which can be considered harm to the person and the community. The AI system's misuse directly led to these harms, fulfilling the criteria for an AI Incident. The subsequent account suspension and platform response are complementary but do not negate the incident classification.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability

Industries
Media, social platforms, and marketing
Digital security

Affected stakeholders
Women

Harm types
Reputational
Human or fundamental rights
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Liu Xiaoqing urgently refutes the rumours: "It's not me"

2025-03-03
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated videos and audio impersonating Liu Xiaoqing, which could plausibly lead to harm such as misinformation, reputational damage, or fraud. Since the videos have been widely viewed and caused public concern, the AI system's use could plausibly lead to an AI Incident, but no direct harm is reported as having occurred. Therefore, this qualifies as an AI Hazard due to the credible risk of harm from AI-generated deepfake content.
Liu Xiaoqing responds to being impersonated by AI; the account involved was swiftly banned

2025-03-03
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating synthetic videos and audio impersonating a real person, which is a clear use of AI technology. The harm includes violation of the individual's portrait rights and the spread of misleading content that deceives viewers, which can be considered harm to the person and the community. The AI system's misuse directly led to these harms, fulfilling the criteria for an AI Incident. The subsequent account suspension and platform response are complementary but do not negate the incident classification.
"AI Liu Xiaoqing" racks up over 10,000 likes! Liu herself speaks out: "Oh my god..."

2025-03-03
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate synthetic videos and voices that impersonate real individuals without consent, which constitutes a violation of personal rights (a breach of applicable law protecting fundamental rights). The AI-generated content has caused confusion and potential reputational harm, indicating realized harm to individuals and communities. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use. The mention of regulatory responses and legal provisions supports the context but does not change the primary classification.
Liu Xiaoqing responds to being impersonated by AI; the account involved was swiftly banned

2025-03-03
yangtse.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create synthetic videos and audio impersonating a real person, clearly implicating an AI system. The misuse of this system has directly led to harm in the form of a violation of Liu Xiaoqing's personal rights (image and voice rights) and potential misinformation of the public, which can be considered harm to communities. The rapid suspension of the account shows recognition of the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and misinformation).
Nearly 80,000 likes; a shocked Liu Xiaoqing speaks out: "It's not me"

2025-03-03
news.ifeng.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos and audio impersonating real people without authorization. These AI-generated fakes have directly led to harms including identity infringement, potential fraud, misinformation, and social trust erosion. The article reports realized harms (e.g., scams, legal cases of voice rights infringement) and ongoing societal impacts, meeting the criteria for an AI Incident. The discussion of regulatory responses and expert opinions supports the assessment but does not overshadow the primary incident of harm caused by AI misuse.
Liu Xiaoqing responds urgently! Zhang Wenhong and Andy Lau have also been victims

2025-03-03
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and AI voice synthesis used to impersonate celebrities and public figures, which has led to misinformation and fraudulent promotion of products. This misuse of AI systems directly causes harm to the individuals' reputations and potentially misleads the public, fitting the definition of an AI Incident involving violations of rights and harm to communities. The involvement of AI in generating fake content that has been deployed and caused harm is clear and direct.
AI-generated disinformation is rampant; how can we stem it?

2025-03-03
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating false videos, fake statements, and misinformation that have already caused harm by misleading the public and damaging social trust and stability. It references real examples, such as AI-generated videos of a public figure and false data spreading online, indicating that harm is occurring. The involvement of AI in producing and disseminating false information that disrupts social order and public trust fits the definition of an AI Incident, as the harm is realized and directly linked to AI system use. The discussion of governance and legal responses is complementary but secondary to the main focus on the harm caused by AI-generated disinformation.
One video, nearly 80,000 likes! Liu Xiaoqing: "Oh my god..."

2025-03-03
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic videos and voices impersonating real individuals without consent, leading to violations of their portrait and reputation rights. The article explicitly mentions the harm caused by these AI-generated deepfakes, including defamatory content and public confusion about authenticity. Legal frameworks are cited that prohibit such unauthorized use, confirming the recognition of harm. Since the AI-generated content has already been disseminated and caused harm, this is an AI Incident rather than a hazard or complementary information.
A shocked Liu Xiaoqing speaks out: "It's not me!" Lei Jun and Andy Lau are also victims...

2025-03-03
news.ycwb.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake video and voice synthesis) to create unauthorized synthetic media of public figures, which has directly led to harms such as identity infringement, reputational damage, fraud, and misinformation. These harms fall under violations of rights and harm to communities. The article also references legal cases confirming these harms and ongoing governance efforts, but the primary focus is on the realized harms caused by AI-generated content. Therefore, this qualifies as an AI Incident.
Liu Xiaoqing responds to AI impersonation, account swiftly banned; AI-faked videos draw attention

2025-03-04
news.china.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating fake videos and audio impersonating real individuals, which constitutes the use of AI systems. The misuse of these AI-generated deepfakes has directly led to violations of personal rights (such as portrait rights and reputation) and the spread of false information, which harms communities and individuals. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm (violation of rights and misinformation).
AI-generated videos repeatedly impersonate public figures. Exclusive commentary: making "AI governance" outpace "AI wrongdoing" cannot wait

2025-03-05
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated videos impersonating public figures, which is a clear use of AI systems for generating content. The harm includes misinformation, reputational damage, and potential violation of rights of the individuals impersonated. The incident has already occurred, with significant public impact and official responses, fitting the definition of an AI Incident. The article also discusses the need for governance to prevent such AI misuse, but the primary focus is on the realized harm caused by the AI-generated fake videos.
Liu Xiaoqing, Jin Dong, and Louis Koo have all become victims of AI face-swapping; stars struggle to escape the fate of digital puppets

2025-03-06
news.china.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated synthetic videos and audio impersonating celebrities, which constitutes the use of AI systems to create misleading content. This misuse has caused harm to the celebrities' rights and reputation, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized as the celebrities are victims of unauthorized AI-generated content, and legal concerns about infringement and fraud are raised. Therefore, this event qualifies as an AI Incident.