China Removes 98,000 Accounts for Unlabeled AI-Generated Content


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Chinese authorities removed over 98,000 social media accounts for publishing AI-generated videos and other content without proper labeling, which misled the public and blurred the line between reality and fiction. The absence of clear AI-generated content labels contributed to misinformation and harmed public understanding, prompting regulatory intervention.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of AI-generated videos that were published without proper AI-generated content labels, misleading users about the nature of the content. This misuse of AI-generated content has led to harm by misleading the public and damaging the network ecology, which qualifies as harm to communities. The regulatory actions and platform requirements are responses to this harm. Since the harm has already occurred through misleading content dissemination, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


2026-05-03
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated videos that were published without proper AI-generated content labels, misleading users about the nature of the content. This misuse of AI-generated content has led to harm by misleading the public and damaging the network ecology, which qualifies as harm to communities. The regulatory actions and platform requirements are responses to this harm. Since the harm has already occurred through misleading content dissemination, this qualifies as an AI Incident rather than a hazard or complementary information.

Cyberspace authorities crack down on "self-media" failure to properly label information sources

2026-05-03
news.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content that is not properly labeled, which misleads users about the nature of the content (virtual vs. real). This relates to the use of AI systems in content creation and the failure to disclose AI involvement, which can cause harm by misleading the public and damaging the information environment (harm to communities). However, the event focuses on regulatory enforcement and policy measures rather than a specific incident of harm caused by AI malfunction or misuse. There is no direct or indirect harm from a particular AI system malfunction or misuse described, but rather a systemic issue of non-compliance and misinformation risk. Therefore, this is best classified as Complementary Information, as it provides context on governance responses and enforcement actions addressing AI-related misinformation and labeling issues.

Nearly 100,000 self-media accounts penalized for failing to properly label information sources

2026-05-03
早报
Why's our monitor labelling this an incident or hazard?
The article reports on the official crackdown on self-media accounts that fail to properly label AI-generated content and other information sources, leading to public misinformation and social harm. While AI-generated content is involved, the main focus is on the regulatory and governance response to these issues, not on a specific AI system causing direct or indirect harm through malfunction or misuse. The harm described is societal and related to misinformation, and the AI's role is part of a broader content-labeling and misinformation problem. This fits the definition of Complementary Information, as it provides an update on governance measures and societal responses to AI-related content issues rather than reporting a distinct AI Incident or Hazard.

Cyberspace authorities crack down on "self-media" failure to properly label information sources

2026-05-03
China News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated videos that were published without proper AI-generated content labeling, which misleads the public and blurs the line between reality and fiction. This misuse of AI-generated content has directly led to harm by misleading the public and damaging the online information environment, which qualifies as harm to communities and a violation of rights to accurate information. The regulatory response and enforcement actions are described, but the main focus is on the incident of harm caused by the misuse and lack of transparency of AI-generated content and other information. Therefore, this qualifies as an AI Incident.

Cyberspace authorities crack down on "self-media" failure to properly label information sources

2026-05-03
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being published without proper labeling, which can mislead the public and blur the line between virtual and real. This involves AI systems in content creation. However, the main focus is on the regulatory crackdown and platform enforcement to correct these behaviors, not on a specific AI Incident causing direct or indirect harm. The harms are potential or ongoing societal harms addressed through governance measures. Thus, it fits the definition of Complementary Information, as it details societal and governance responses to AI-related challenges rather than reporting a new AI Incident or Hazard.

More than 98,000 violating accounts penalized: cyberspace authorities crack down on "self-media" failure to properly label information sources

2026-05-04
人民网
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content labeling, indicating the presence of AI systems generating content. However, the article does not describe a specific incident where the AI system's use directly or indirectly caused harm such as injury, rights violations, or property/community harm. Instead, it reports on regulatory enforcement and preventive measures to ensure compliance and improve transparency. Therefore, this is not an AI Incident or AI Hazard but rather a governance and societal response to AI-related content issues, fitting the definition of Complementary Information.

Over 98,000 accounts penalized!

2026-05-03
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content labeling, indicating the presence of AI systems generating content. However, the main focus is on the regulatory response to previously existing issues of misinformation and improper labeling, rather than a new incident of harm caused directly by AI systems. There is no direct or indirect harm caused by AI system malfunction or misuse described here; rather, it is a governance and compliance action to prevent misinformation and improve transparency. Therefore, this is Complementary Information about societal and governance responses to AI-related content issues.

More than 98,000 violating accounts penalized! Cyberspace authorities crack down on "self-media" failure to properly label information sources (网信中国 WeChat official account, 2026-05-03)

2026-05-03
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology by some 'self-media' accounts to generate videos without proper AI-generated content labeling, which misleads the public about the nature of the content. This misuse of AI-generated content contributes to misinformation and harms the community by blurring the line between reality and fiction. The regulatory response is aimed at mitigating these harms. Since the AI system's use has directly led to harm to communities through misinformation and deception, this qualifies as an AI Incident.

Some "self-media" accounts failed to properly label information sources when publishing content on current affairs and other topics, misleading public understanding

2026-05-03
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to generate videos that are not properly labeled, misleading the public about the authenticity of the content. This misuse of AI-generated content contributes to misinformation and harms public understanding, which is a form of harm to communities. The regulatory response and account removals confirm that harm has occurred and is recognized. Hence, the event meets the criteria for an AI Incident because the AI system's use has directly led to harm through misinformation and deception.

Failure to properly label information sources! Over 98,000 accounts penalized (证券之星, Stockstar)

2026-05-03
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated videos that were published without proper AI-generated content labels, misleading users about the nature of the content. This misuse of AI-generated content has directly led to harm by misleading the public and damaging the online information environment, which constitutes harm to communities. The regulatory actions and account removals are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use (AI-generated content without proper labeling) has directly led to harm through misinformation and public deception.

Cyberspace authorities crack down on "self-media" failure to properly label information sources

2026-05-03
金羊网
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated content labeling, indicating the presence of AI systems in content creation. However, the event focuses on regulatory enforcement and platform self-inspection to correct and prevent misinformation, rather than describing a specific incident of harm caused by AI or a direct malfunction. There is no report of realized harm or a specific incident caused by AI misuse or failure, nor is there a direct indication of plausible future harm from AI systems themselves. Instead, it is a governance and compliance update addressing past and potential issues, which fits the definition of Complementary Information.

Unlabeled AI-generated content and staged mother-in-law and intergenerational conflicts... over 98,000 accounts penalized

2026-05-03
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly: the accounts failed to label AI-generated content, which misled the public and harmed the online community. This constitutes a violation of rights related to truthful information and harms communities by spreading misinformation. Since the harm has already occurred and the misuse of AI-generated content is a contributing factor, this qualifies as an AI Incident.