Chinese Police Crack Down on AI-Generated Misinformation Online

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Gansu and Henan, China, multiple individuals used AI tools to fabricate and spread false videos and information online, including fake war reports and disaster news, misleading the public and disrupting social order. Police intervened, issuing warnings, deleting content, and imposing administrative penalties to curb AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of AI software to fabricate false information that was spread online, causing social harm by misleading the public and disrupting public order. The harm has already occurred as the misinformation was disseminated and led to police action. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of law and harm to communities through misinformation.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Henan police announce ten typical cases from crackdown on online rumors, most aimed at gaining followers and driving traffic

2026-03-16
China News
"Gansu joining the war"? Blogger summoned after using AI to create and publish false information

2026-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate false videos and misinformation, which caused harm to communities by misleading public perception and disrupting social order. The authorities' response confirms the harm occurred and the AI's pivotal role in causing it. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of laws and harm to communities.
Blogger summoned over "Gansu joining the war" rumor amid crackdown on AI rumor-mongering

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate false videos and misinformation, which were disseminated and caused social harm by misleading public perception and disturbing social order. The authorities' intervention confirms the harm occurred and the AI system's role in causing it. Therefore, this is an AI Incident as the AI-generated misinformation has directly led to harm to communities and breaches of legal obligations.
Blogger summoned over "Gansu joining the war" rumor; public security authorities: the rumor was created and published using AI

2026-03-16
金羊网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to maliciously generate and spread false content, which has directly led to harm by misleading the public and disturbing social order, constituting harm to communities. The AI system's use in creating and disseminating misinformation is central to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of laws and social harm.
Douyin blogger summoned by public security authorities for using AI to create and publish false information including "Gansu joining the war"

2026-03-16
py.qianlong.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate false videos that have already caused social harm by misleading the public and disturbing the network environment. This constitutes a violation of laws and harms communities through misinformation. Therefore, it meets the criteria of an AI Incident, as the AI system's use directly led to harm to communities and a breach of legal obligations.
Explosion in Shangli, Pingxiang leaving 2 dead and 2 injured? Cyber police debunk rumor: a fake video generated with AI

2026-03-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake video depicting a harmful event (explosion causing deaths and injuries) that did not actually occur. This false information was spread publicly, misleading people and disrupting public order, which constitutes harm to communities. Therefore, this qualifies as an AI Incident because the AI-generated content directly led to social harm through misinformation and public disturbance.
Beijing launches "Qinglang Jinghua · AI for Good" special campaign targeting five categories of AI-related online disorder

2026-03-17
m.21jingji.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, particularly AI-generated synthetic content and deepfake technologies, which have been used to produce harmful and illegal content. The harms targeted include violations of human rights (e.g., unauthorized use of individuals' likenesses), harm to communities (spread of false and malicious information), and harm to minors (exposure to inappropriate content). Because the event centers on ongoing misuse and realized harm caused by AI-generated content, rather than a future risk or a general update, it is classified as an AI Incident.
Beijing launches "Qinglang Jinghua · AI for Good" special campaign, targeting five categories of AI-related online disorder

2026-03-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems generating harmful content and deepfakes, which have already caused or are causing harm such as violations of rights, misinformation, and harm to minors. The initiative is a governance and enforcement response to these harms, aiming to mitigate and prevent further incidents. Since the article primarily reports on the launch of a regulatory and enforcement action addressing existing AI-related harms and promoting responsible AI use, it constitutes Complementary Information rather than a new AI Incident or AI Hazard. The harms are recognized and ongoing, but the article focuses on the response rather than describing a new incident or hazard itself.
Beijing launches special campaign targeting five categories of AI-related online disorder

2026-03-17
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article outlines a proactive regulatory action to address and prevent various harms caused by AI-generated content, such as misinformation, privacy violations, and harmful content dissemination. While these harms are recognized as significant, the article does not describe a specific AI Incident where harm has already occurred due to AI system malfunction or misuse. Rather, it is a governance and enforcement response to ongoing AI-related risks and misuse, aiming to prevent or reduce harm. Therefore, this event fits best as Complementary Information, providing context on societal and governance responses to AI-related challenges.
Purported "standard answers to the 2026 Gansu provincial civil service exam" circulating online? The truth is......

2026-03-19
py.qianlong.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate false information that misled the public and caused social harm. The dissemination of fabricated exam answers is a clear violation of information integrity and disrupts social order, constituting harm to communities. The authorities' intervention and removal of the content do not negate the fact that harm occurred. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI-generated misinformation.
"2026年甘肃省考标准答案"不实!天水一网民虚构视频被警方约谈

2026-03-18
金羊网
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate false video content that led to harm in the form of social disruption and misinformation, which qualifies as harm to communities. The event involves the use of AI in a way that directly caused this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of social order and dissemination of false information causing harm.
Two netizens summoned for using AI to create false disaster information

2026-03-19
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated false videos about disasters, which were posted online and received significant interaction, indicating actual dissemination of misinformation. This misinformation can harm communities by causing panic, confusion, or mistrust, which is a recognized form of harm under the AI Incident definition. The police intervention and legal framework cited confirm the recognition of harm caused. Therefore, this qualifies as an AI Incident due to the direct use of AI to create and spread harmful false information.
Netizen Liao placed under administrative detention by police in accordance with law for fabricating rumor of an interchange collapse in Nan'an, Chongqing - 欧洲头条

2026-03-22
xinouzhou.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create fabricated video content that falsely showed a bridge collapse, which is a clear example of AI-generated misinformation causing harm to communities by spreading panic and false information. The event describes realized harm (social disruption and misinformation) caused by the AI-generated content, meeting the criteria for an AI Incident. The legal response and administrative detention further confirm the seriousness of the harm caused by the AI system's misuse.