AI-Generated Videos Cause Misinformation and Legal Violations in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems have been used to generate fake videos and images, spreading misinformation about geopolitical conflicts and altering film content. These AI-modified videos infringe intellectual property and portrait rights, causing harm to rights holders and public trust. The incidents have prompted legal and regulatory scrutiny in China. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to generate altered video content that infringes on intellectual property and personal rights, causing realized harm to rights holders and communities through misinformation and unauthorized use. The AI system's use directly leads to violations of intellectual property and portrait rights, which are breaches of applicable law protecting fundamental rights. The article also describes regulatory actions and platform interventions as responses to these harms. Therefore, this qualifies as an AI Incident due to the direct and ongoing harm caused by the AI-generated content and its legal implications. [AI generated]
AI principles
Respect of human rights
Transparency & explainability

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
Business
General public

Harm types
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

"AI mash-ups" draw attention: what legal risks lie behind the business?

2026-03-18
news.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate altered video content that infringes on intellectual property and personal rights, causing realized harm to rights holders and communities through misinformation and unauthorized use. The AI system's use directly leads to violations of intellectual property and portrait rights, which are breaches of applicable law protecting fundamental rights. The article also describes regulatory actions and platform interventions as responses to these harms. Therefore, this qualifies as an AI Incident due to the direct and ongoing harm caused by the AI-generated content and its legal implications.

"AI mash-ups": legal risks behind the business

2026-03-18
法制日报
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate altered video content that infringes on intellectual property and portrait rights, which are legal rights protected under applicable law. The article documents actual harm occurring through unauthorized use and dissemination of AI-modified videos, including potential harm to public interest and historical truth. The AI systems' use directly leads to violations of rights and societal harm, fulfilling the criteria for an AI Incident. The article also discusses legal responses and platform actions, but the primary focus is on the realized harms caused by AI-generated content, not just responses or potential risks.

2026-03-16
中华网军事频道
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos and images that have been widely disseminated on social media, leading to misinformation about a sensitive geopolitical conflict. This misinformation can harm communities by spreading false narratives, causing confusion, panic, or misinformed public opinion. Since the AI-generated content has already been released and viewed by millions, the harm is realized, constituting an AI Incident under the definition of harm to communities due to misinformation caused by AI-generated content.

2026-03-16
中华网军事频道
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos that are widely spread and cause misinformation, which harms communities by misleading the public and potentially escalating tensions. The AI system's use in creating and disseminating false content directly leads to harm as defined by harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated disinformation in a conflict context.

AI video generation enters the "physical realism" era? Team releases VMBench, the first benchmark for evaluating physical laws

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI video generation) and a new evaluation benchmark (VMBench) designed to assess and improve the physical realism of AI-generated videos. However, the article does not report any incident of harm caused by AI-generated videos, nor does it describe a plausible future harm scenario. Instead, it presents a positive advancement in AI system evaluation and quality control. Therefore, this is best classified as Complementary Information, as it provides supporting context and development in the AI ecosystem without describing an AI Incident or AI Hazard.

When "AI face-swapping" hits the copyright wall

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (face-swapping algorithm) was explicitly used to generate altered videos by replacing faces in copyrighted original videos without authorization. This use directly led to a violation of the copyright holder's information network dissemination rights, a breach of intellectual property rights under applicable law. The harm is realized and legally recognized, with the court ordering compensation and requiring the infringing content to be taken down. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing a legal rights violation and economic harm to the original content creator.

"AI mash-ups": legal risks behind the business

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate altered video content that infringes on copyright and portrait rights, which are legal rights protected under applicable law. The harms are realized, as the videos are published and monetized, causing violations of intellectual property and personal rights. The article also mentions enforcement actions and platform responsibilities, confirming the harm is occurring and recognized. Hence, this fits the definition of an AI Incident due to direct harm caused by AI system use in content generation and distribution infringing rights and potentially harming communities through misinformation or distortion.

2026-03-21
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content in short videos, which implies the involvement of AI systems. However, the article does not describe any actual harm caused by AI systems, nor does it report any incident or malfunction leading to harm. Instead, it focuses on regulatory and governance responses to potential misinformation and social disruption risks associated with unlabeled AI-generated content. Therefore, this is a case of Complementary Information, as it provides context on societal and governance responses to AI-related risks without reporting a new AI Incident or AI Hazard.

Cyberspace Administration of China directs website platforms to comprehensively standardize short-video content labeling

2026-03-21
news.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content as part of the short videos requiring labeling, indicating the presence of AI systems in content creation. However, the article does not describe any specific harm that has directly or indirectly resulted from AI systems, nor does it report a particular incident or malfunction causing harm. Instead, it outlines governance and regulatory responses to potential misinformation and social disruption risks associated with AI-generated and other misleading content. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related risks rather than reporting a new AI Incident or AI Hazard.

China rectifies chaotic short-video labeling: six platforms act against more than 3,400 violating accounts

2026-03-21
早报
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content as a source of misleading short videos that have caused harm by misleading public perception and disturbing social order, which qualifies as harm to communities. The regulatory and platform responses aim to mitigate this harm by enforcing labeling and removing violators. Since the AI-generated content has already caused social disruption and platforms have removed violating content and accounts, this constitutes an AI Incident due to realized harm linked to AI system use (AI-generated videos) and their misuse or lack of proper labeling leading to public harm. The event is not merely a future risk or complementary information but describes concrete harm and remediation actions.

Cyberspace Administration of China directs website platforms to comprehensively standardize short-video content labeling

2026-03-21
法制日报
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content as part of the short videos requiring labeling and regulation. However, the article focuses on the regulatory and governance response to existing issues rather than describing a specific AI system causing harm or a direct incident. There is no report of actual harm caused by AI systems here, nor a specific AI hazard event. Instead, this is a governance and societal response to mitigate potential harms from AI-generated and other misleading content. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on societal and governance responses to AI-related challenges without describing a new AI Incident or AI Hazard.

Cyberspace Administration of China directs website platforms to comprehensively standardize short-video content labeling

2026-03-21
东方财富网
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory and governance actions to standardize labeling of short videos, including those with AI-generated content, to prevent misinformation and social disruption. It does not report a specific AI Incident (no direct or indirect harm caused by AI systems is described) nor does it describe a plausible future harm event (AI Hazard). Instead, it details societal and governance responses to existing challenges posed by AI-generated or manipulated content, fitting the definition of Complementary Information.

Multiple US-Israel-Iran conflict videos debunked; AI fabrication draws attention

2026-03-21
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated fake videos that have been widely spread online, causing confusion and misinformation about ongoing military conflicts. The AI systems' use in fabricating these videos directly leads to harm by misleading the public and disrupting truthful information flow, which fits the definition of an AI Incident involving harm to communities. The harm is realized, not just potential, as the misinformation is actively circulating and affecting public perception. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Cyberspace Administration of China directs website platforms to comprehensively standardize short-video content labeling

2026-03-21
新浪财经
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident or AI Hazard where harm has occurred or is imminent due to an AI system malfunction or misuse. Instead, it details a governance response to existing challenges posed by AI-generated and other misleading short video content. The main narrative is about regulatory measures, platform compliance, and enforcement actions to improve content labeling and transparency. This fits the definition of Complementary Information, as it provides updates on societal and governance responses to AI-related issues, rather than describing a new incident or hazard itself.

Celebrities have long suffered from AI videos: face-swapped, co-opted for endorsements, and openly priced for sale

2026-03-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic fake videos of celebrities without their consent, leading to direct harm such as infringement of portrait rights and reputational damage, which are violations of legal and human rights protections. The article provides multiple examples of such harms occurring, including fraud and negative impacts on celebrities. The commercial sale of these AI-generated videos further indicates ongoing and active misuse of AI technology causing harm. Hence, this is a clear AI Incident as per the definitions provided.

Cyberspace Administration of China: short videos containing AI-generated and similar content should be labeled wherever applicable

2026-03-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content in short videos and addresses the harm caused by unmarked AI-generated or manipulated content misleading the public and disrupting social order, which constitutes harm to communities and information integrity. However, the article primarily discusses regulatory and governance responses to this issue rather than a specific AI Incident or a direct AI Hazard event. There is no description of a particular incident causing harm or a specific AI system malfunction. Instead, it focuses on policy enforcement and platform actions to mitigate existing or potential harms. Therefore, this is best classified as Complementary Information, as it provides important context and updates on societal and governance responses to AI-related harms in the digital content ecosystem.

Cyberspace Administration of China directs website platforms to comprehensively standardize short-video content labeling

2026-03-21
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content as part of the short videos requiring labeling, indicating the presence of AI systems generating content. However, the article does not describe a specific incident where AI use has directly or indirectly caused harm, nor does it describe a specific event where harm has occurred or a near miss. Instead, it details regulatory and governance responses to potential harms from AI-generated and other misleading content, including platform actions and government enforcement plans. Therefore, this is best classified as Complementary Information, as it provides important context on societal and governance responses to AI-related content issues without reporting a new AI Incident or AI Hazard.

Cyberspace Administration of China comprehensively standardizes short-video content labeling

2026-03-21
news.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on governance and regulatory responses to AI-generated and other manipulated short video content. It does not describe a specific AI Incident or AI Hazard causing or plausibly causing harm, but rather the regulatory measures and enforcement actions taken to address existing issues. Therefore, it fits the definition of Complementary Information as it provides updates on societal and governance responses to AI-related content management.

CCP pushes mandatory short-video labeling, tightening the space for online speech

2026-03-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through AI-generated content and AI-based content moderation on major platforms. The government's policy enforces mandatory labeling and penalizes mislabeling, directly impacting creators' freedom of expression and leading to account bans and detentions. These actions constitute violations of human rights and harm to communities by suppressing dissent and controlling information flow. The AI system's role in generating content and moderating it is pivotal to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

AI-generated short videos run rampant in China; authorities require labeling

2026-03-21
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating short video content that misleads the public, which constitutes harm to communities (a form of harm under the AI Incident definition). However, the article focuses primarily on the government's regulatory measures and platform responses to address this harm rather than describing a new incident of harm itself. Since the main narrative is about societal and governance responses to an existing AI-related harm, this qualifies as Complementary Information rather than a new AI Incident or AI Hazard.

Multiple companies respond to AI data-poisoning problem | Compliance Weekly

2026-03-22
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, AI chatbots, AI tools like OpenClaw) and their involvement in harmful activities such as AI data poisoning, generating false information, and malicious manipulation, which have already caused harm in the AI ecosystem. The exposure of these harms and the responses from companies and institutions confirm that harm has materialized. The article also includes governance and compliance updates, which are complementary information. Since the main harmful event is the AI data poisoning and its consequences, this qualifies as an AI Incident. Other parts of the article that discuss responses, research findings, and new product announcements are complementary information but do not overshadow the primary incident.

Kenya hit by recent floods as "AI disaster footage" floods social media

2026-03-22
news.cri.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic videos that are falsely presented as real disaster footage, leading to misinformation and deception on social media. This misinformation can harm communities by causing panic, misallocation of aid, or erosion of trust in genuine information sources, which fits the harm to communities category. The AI system's use is central to the harm, as the videos are AI-generated and spread as if real. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring through the dissemination of false AI-generated content about a real disaster.

Cyberspace Administration of China: comprehensively standardizing short-video content labeling

2026-03-21
上海热线
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident or AI Hazard causing harm or posing an immediate risk. Instead, it details regulatory measures and platform responses aimed at mitigating misinformation and social disruption risks associated with AI-generated and other misleading short video content. These actions are complementary information that enhance understanding of ongoing efforts to manage AI-related risks in the digital ecosystem.

Kenya hit by recent floods as "AI disaster footage" floods social media

2026-03-22
中国经济网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake disaster videos, which are then spread on social media. While these videos are misleading and could potentially cause harm by spreading misinformation, the article does not document any realized harm such as injury, rights violations, or disruption caused by these AI-generated videos. The main focus is on raising awareness about the existence and spread of such AI-generated content and advising caution. Hence, it fits the definition of Complementary Information, providing supporting context about AI's role in misinformation during a disaster, rather than reporting a direct AI Incident or a plausible future hazard.

Multiple companies respond to AI data-poisoning problem | Compliance Weekly

2026-03-22
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI data poisoning affecting large AI models, which is a direct harm to the AI ecosystem and can lead to misinformation, manipulation, and loss of trust, fitting the definition of an AI Incident (harm to communities and violation of obligations). The responses from companies and regulatory bodies are complementary information that contextualizes and addresses the incident. Other topics like security risks of OpenClaw and research on chatbots' mental health effects are either warnings or governance responses, not new incidents. Therefore, the primary classification is AI Incident due to the realized harm from AI data poisoning.

Short-video labeling: making it harder to pass fakes off as genuine

2026-03-23
大洋网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (AI digital humans, AI-synthesized videos) that have directly led to harm, including emotional and financial harm to an elderly person, misinformation causing public panic, and widespread misinformation disrupting social order. These constitute realized harms linked to the use and misuse of AI systems. Therefore, the event described qualifies as an AI Incident because the AI system's use has directly or indirectly caused harm to individuals and communities, including violations of trust and disruption of public order.

Celebrities have long suffered from AI videos: face-swapped, co-opted for endorsements, and openly priced for sale

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate realistic fake videos of celebrities, which have directly led to harms including violations of personal rights (portrait and reputation), unauthorized commercial use, and potential misinformation. These harms fall under violations of human rights and intellectual property rights. The presence and misuse of AI systems for deepfake video generation is explicit, and the harms are realized and ongoing. Therefore, this qualifies as an AI Incident.

AI-generated and similar content should be labeled wherever applicable

2026-03-21
新浪财经
Why's our monitor labelling this an incident or hazard?
The article focuses on the regulatory and governance measures taken by the Central Cyberspace Administration to address the inconsistent labeling of AI-generated and other types of short video content. It does not describe a specific AI system causing harm or a direct or indirect AI-related incident or hazard. Instead, it reports on societal and governance responses to AI-related content issues, including enforcement actions and platform compliance efforts. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on responses to AI-related challenges but does not itself describe a new AI Incident or AI Hazard.

Cyberspace Administration of China: short videos containing AI-generated and similar content should be labeled wherever applicable

2026-03-21
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content in short videos and addresses the harm caused by unmarked AI-generated or misleading content that misleads public perception and disrupts social order, which constitutes harm to communities and the information environment. However, the article focuses on regulatory and platform responses to this issue rather than describing a specific AI Incident or a direct harm event. Therefore, it is best classified as Complementary Information, as it provides important context and updates on governance and mitigation efforts related to AI-generated content harms.

Million-view videos all fake! Game recordings passed off as real combat footage, with CCTV stepping in to debunk them

2026-03-22
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos that have been widely viewed and shared, directly misleading the public about real-world conflict events. This misinformation harms communities by spreading false narratives and could escalate tensions or cause social disruption. The use of AI to create these videos is central to the harm, fulfilling the criteria for an AI Incident due to realized harm to communities and violations of rights to accurate information. The article documents actual harm caused by AI-generated content, not just potential harm or general AI news, so it is classified as an AI Incident.