AI Disrupts Chinese Film Industry: Job Losses, Fake Content, and Public Backlash


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems in China’s film industry have led to significant job losses, economic insecurity, and reputational harm through AI-generated actors, scriptwriting, and fake videos. Public backlash and legal concerns over image and voice likeness violations have prompted regulatory responses, highlighting ongoing harm and ethical challenges.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems used for deepfake generation (AI face-swapping, voice cloning) that have directly led to harms such as unauthorized use of likeness, misinformation, reputational damage, and potential financial loss due to scams. These harms fall under violations of human rights (portrait and reputation rights) and harm to communities (misleading information). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The legal responses and restrictions on AI-generated content are complementary information but do not change the primary classification.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Arts, entertainment, and recreation; Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


2026-03-23
guancha.cn

AI Is Taking Jobs: Even Star Actors Can't Make a Living (Video)

2026-03-23
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated digital actors and AI short dramas replacing human actors, causing a large-scale loss of employment and income among actors. This is a direct harm to the community (harm to groups of people) caused by the use of AI systems in content generation and production. The harm is realized and ongoing, not merely potential. The AI system's use in this context is central to the harm described, fulfilling the criteria for an AI Incident under the OECD framework.

大声思考 | The Earthquake in the Film Industry Is Only an Opening Skirmish

2026-03-24
QQ新闻中心
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI video generation models and AI tools used in scriptwriting, editing, and virtual production, which are AI systems by definition. It details how these AI systems have already caused significant job losses and economic harm to many workers in the film industry, fulfilling the criteria for harm to people and communities. The harm is realized and ongoing, not merely potential. Hence, the event is an AI Incident because the development and use of AI systems have directly led to harm (job displacement and economic insecurity) in a specific sector.

霍启刚 Reveals His Likeness Was Stolen by AI: "They Extracted My Voice and Appearance to Generate Short Videos, but Those Are Not My Views at All"

2026-03-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate fake videos and voice content without consent, leading to misinformation and reputational harm, which constitutes harm to the individual and potentially to the community through misleading information and fraud. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of personal rights and harm to the individual. The presence of AI is explicit, the harm is realized, and legal violations are identified, confirming this classification.

插话绘 | Will AI Replace Actors? Don't Panic, It Only Weeds Out the "Interchangeable Performers"

2026-03-23
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate actor performances, which is an AI system involvement in content creation. The use of AI actors has led to the displacement of certain human actors performing repetitive roles, which can be considered harm to employment and livelihoods (harm to groups of people). This harm is realized as the article mentions actors losing roles and the industry shifting. Therefore, this qualifies as an AI Incident due to indirect harm to workers caused by AI use in the industry. The article does not describe a potential future harm but an ongoing impact, so it is not an AI Hazard. It is not merely complementary information because it focuses on the impact of AI use causing displacement, not just updates or responses. It is not unrelated as AI systems are central to the event.

"AI演员"会淘汰真人演员吗?媒体:精进自己又何须焦虑

2026-03-23
华商网
Why's our monitor labelling this an incident or hazard?
The article does not describe any event where AI use has directly or indirectly caused harm or violation of rights. It discusses AI's application in the entertainment industry as a tool for efficiency and artistic enhancement, with no indication of realized or potential harm. The focus is on industry adaptation and human skill development rather than any AI-related incident or hazard. Therefore, it fits best as Complementary Information, providing context and societal perspective on AI's evolving role in the industry.

"拼脸"拼不出AI演员的未来

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in generating digital actors, indicating AI system involvement. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred, nor does it report a concrete AI Hazard event with plausible future harm beyond general concerns. Instead, it highlights public and ethical concerns about AI use in image synthesis and potential rights violations, serving as a societal and governance-related discussion. This aligns with the definition of Complementary Information, as it enhances understanding of AI's societal impact and ethical challenges without reporting a new incident or hazard.

What Specific Technical Features and Capabilities Do the AI Actors 秦凌岳 and 林汐颜, Signed by 耀客传媒, Have?

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear, as the actors are AI-generated virtual characters created through advanced algorithmic and multi-modal models. The article discusses their development and use, including potential ethical and legal challenges and industry impacts. However, no direct or indirect harm has been reported as having occurred, nor is there a specific event indicating plausible imminent harm. The concerns raised are potential or ongoing debates rather than realized incidents or imminent hazards. Thus, the article fits the definition of Complementary Information, providing supporting context and discussion about AI systems and their societal effects without reporting a new AI Incident or AI Hazard.

AI Actors Shake Up the Short-Drama Industry; Assembly-Line Acting Faces Elimination

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI actors in content production, which involves AI systems generating performances. However, it does not report any injury, rights violations, property or community harm, or other significant harms caused by AI. Instead, it frames AI as a disruptive but ultimately beneficial force for industry improvement. There is no indication of realized or potential harm that would qualify as an AI Incident or AI Hazard. The content is primarily an analysis and commentary on AI's role in the entertainment sector, making it Complementary Information.

观察 | AI Actors Are Multiplying, but They Are Not the Audience's Choice

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI actors being used to create content, including unauthorized use of real actors' images leading to legal cases of portrait rights infringement, a violation of personality (portrait) rights under applicable law. This constitutes harm caused by the use of AI systems. Additionally, public resistance and criticism reflect harm to communities in terms of cultural and emotional impact. Therefore, the event qualifies as an AI Incident due to realized harm linked to AI system use.

AI Actors Are Now Starring in Short Dramas?

2026-03-23
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI digital actors, AI-generated dramas) whose use has directly caused multiple harms: potential intellectual property violations (image and voice likeness), social harm (public backlash, ethical concerns), and labor market disruption (fear of job loss among actors). The regulatory response further confirms the recognized harm and need for governance. The harms are realized and ongoing, not merely potential. Hence, the classification as an AI Incident is appropriate.

韩雪 on AI Reshaping the Acting Profession: A Major Shake-Up Within Five Years, Crisis or Opportunity?

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily offers an expert opinion and industry analysis about the transformative potential of AI in the acting profession over the next five years. It does not describe a concrete event involving AI causing harm or a credible imminent risk of harm. The discussion is about ongoing trends and future possibilities rather than a realized incident or a specific hazard. Therefore, it fits best as Complementary Information, providing context and insight into AI's impact on the entertainment sector without reporting an AI Incident or AI Hazard.

耀客传媒's Signing of AI Actors Draws a Boycott; Portrait Rights and Compliance Issues Become the Focus - DoNews

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems creating digital actors (AI-generated images and performances) that closely resemble real human actors, which is a direct use of AI technology. The public and legal experts have raised concerns about potential portrait rights infringement and unfair competition, indicating a credible risk of legal and reputational harm. However, the article does not report any confirmed legal rulings or realized harm yet, only public resistance and legal opinions about possible infringement. Thus, the AI system's use here plausibly could lead to an AI Incident but has not yet caused one. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and other harms if unaddressed.

As AI Actors Make Their Debut, What Exactly Are We Resisting?

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI digital actors and their use in productions, indicating AI system involvement. However, it does not describe any realized harm such as injury, rights violations, or disruption caused by these AI actors. The resistance and debate are societal reactions and concerns about potential future impacts, not evidence of an AI Incident or Hazard. The article also references positive uses of AI for human empowerment, emphasizing the need for human-centered AI application. Thus, it fits the definition of Complementary Information, providing insight into societal and governance responses to AI in entertainment.

What Specific Technical Flaws and "Uncanny Valley" Effects Do AI Short Dramas Exhibit in Emotional Expression?

2026-03-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content centers on the analysis of AI short drama technology and its emotional expression shortcomings, including the uncanny valley effect, without reporting any realized harm or incident involving AI systems. There is no mention of injury, rights violations, disruption, or other harms caused by AI, nor a credible imminent risk of such harms. The article is primarily an expert commentary and overview of challenges and potential improvements in AI-generated acting, which fits the definition of Complementary Information as it provides contextual and technical insights into AI systems and their societal reception without describing a specific AI Incident or AI Hazard.

Just Half a Month After the Two Sessions Ended, the Entertainment Industry Has Been Upended and Many Face Unemployment; 冯远征 Called It Right

2026-03-23
m.163.com
Why's our monitor labelling this an incident or hazard?
The article centers on the evolving use of AI in the entertainment industry and its potential to disrupt employment and creative processes. However, it does not document a realized harm (such as actual job losses directly caused by AI, legal violations, or health/safety incidents) nor does it describe a concrete event where AI use has led or nearly led to harm. Instead, it offers analysis, opinions, and industry responses to the growing presence of AI-generated content. This aligns with the definition of Complementary Information, as it enhances understanding of AI's societal and economic impacts and governance considerations without reporting a new AI Incident or AI Hazard.

Barely Half a Month After the Two Sessions Closed, a Major Reshuffle Has Begun in the Entertainment Industry and Many Jobs Are at Risk; 冯远征's Words Came True

2026-03-23
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated virtual actors) in the entertainment industry, directly impacting human actors' employment and livelihoods. The harm is realized as many actors lose roles or face reduced opportunities due to AI competition, which is a form of harm to groups of people. The article explicitly describes this impact and the resulting challenges faced by actors, making it an AI Incident under the framework. The AI system's use in replacing human actors is the direct cause of this harm, fulfilling the criteria for an AI Incident.