Chinese Lawmakers Urge Stronger Measures Against AI-Generated Misinformation

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Chinese lawmakers and corporate executives warn of rampant AI-generated misinformation, including false news fabricated with ChatGPT, deepfake disaster videos that stoke public panic, consumer fraud, and privacy breaches. They call for clearer laws, stronger platform oversight, and advanced detection technologies to curb AI misuse and safeguard public trust and rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating realistic but false images and videos about earthquake disasters, which have been widely shared and believed, causing harm to communities by spreading misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through the dissemination of false information. Additionally, the article includes societal responses such as calls for regulation and public education, but the primary focus is on the realized harm caused by AI-generated fake content, not just potential harm or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence; Consumer services; IT infrastructure and hosting

Affected stakeholders
Consumers; General public

Harm types
Psychological; Economic/Property; Reputational; Public interest; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation; Interaction support/chatbots

In other databases

Articles about this incident or hazard

Li Xiaoxuan, CPPCC National Committee member and vice president of the China Association for Non-Government Education: AI-related crimes should be punished swiftly and severely

2025-03-05
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions harms caused by AI misuse, including fraud, fake news, and privacy violations, which are direct harms linked to AI systems. However, it does not describe a specific incident but rather addresses the broader issue and calls for stronger measures. Therefore, it is best classified as Complementary Information, as it provides context and governance-related responses to existing AI-related harms without detailing a particular AI Incident or Hazard.

How to spot earthquake rumours: 5 tips for identifying AI-generated images and videos

2025-03-07
Sina Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic but false images and videos about earthquake disasters, which have been widely shared and believed, causing harm to communities by spreading misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through the dissemination of false information. Additionally, the article includes societal responses such as calls for regulation and public education, but the primary focus is on the realized harm caused by AI-generated fake content, not just potential harm or complementary information.

Depriving People of the Truth Is a Crime - TMTPost

2025-03-05
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating fake videos, mass-produced false posts, and AI-assisted manipulation of public opinion, which have directly led to harm by spreading misinformation and undermining truth. The harms include damage to companies' reputations, public confusion, and societal disruption, which fall under harm to communities and violation of rights to truthful information. The AI systems' use in these malicious activities is central to the harm described, meeting the criteria for an AI Incident. Although the article also discusses potential future risks and governance responses, the primary focus is on ongoing, realized harms caused by AI misuse in content generation and dissemination.

Everyone's initiative should be mobilised, guiding the public to proactively report leads so that false information has nowhere to hide. Legal education campaigns should be carried out to raise public awareness of the legal consequences of AI-generated rumour-mongering, guiding the public to use legal means to protect their lawful rights and interests and jointly build a healthy online ecosystem.

2025-03-04
Southcn.com (Nanfang Net)
Why's our monitor labelling this an incident or hazard?
The article clearly describes realized harms caused by AI systems generating false information that misleads the public and disrupts social order, which fits the definition of an AI Incident due to harm to communities and violation of rights. It references specific incidents where AI-generated misinformation caused negative social consequences, thus meeting the criteria for an AI Incident rather than a hazard or complementary information. The focus is on actual harms and ongoing incidents, not just potential risks or responses.

5 tips to spot AI-generated rumours: beware of false earthquake information

2025-03-08
China.com (technology channel)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create realistic but false disaster images and videos, which have been disseminated to misinform the public, constituting harm to communities through misinformation. This harm has already occurred as the false information caused public confusion and distress. Therefore, this qualifies as an AI Incident. Additionally, the article includes complementary information about societal and legislative responses, but the primary focus is on the realized harm caused by AI-generated misinformation.

NPC deputies and CPPCC members call for stronger governance of AI-generated misinformation

2025-03-05
Legal Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly describes realized harms caused by AI systems: false news generated by ChatGPT leading to profit from deception, AI-generated videos causing social panic, and misuse of deepfake technology resulting in violations of personal rights and financial harm. These constitute direct harms to individuals and communities, fitting the definition of an AI Incident. While the article also discusses governance and legislative responses, the primary focus is on the harms already occurring due to AI misuse and the urgent need to address them. Therefore, the event is best classified as an AI Incident.

Two Sessions | Focusing on large-model security and taking aim at AI misuse, NPC deputies and CPPCC members offer proposals

2025-03-04
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly describes realized harms caused by AI systems, such as AI-generated false news and videos misleading the public and causing financial and reputational damage, which constitute harm to communities and property. It also mentions violations of rights (copyright, privacy) and security risks from AI model vulnerabilities. These are direct or indirect harms linked to AI system use and misuse, meeting the criteria for an AI Incident. Although the article also discusses governance and mitigation efforts, the primary focus is on the harms and risks already manifesting, not just potential future harm or complementary information.

Zhang Kaili: intensify the crackdown on AI "mangled edits" of classic film and TV works and improve platform content review mechanisms | Deputies and Delegates Here

2025-03-08
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create altered versions of classic films without authorization, causing direct harm to copyright holders and the cultural community by spreading misleading and inappropriate content. The harm is realized and ongoing, including violations of intellectual property rights and damage to the reputation of original works. The article's focus on the negative impacts and calls for regulatory action confirms the presence of an AI Incident rather than a mere hazard or complementary information. Therefore, the classification is AI Incident.

Weibo: 7,372 items of violating content removed, including rumours such as "verdict announced in first intelligent-driving fatality case"; 46 accounts sanctioned

2025-02-20
Eastmoney.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems only indirectly, via rumours related to "intelligent driving"; the main focus is the platform's removal of misinformation and suspension of accounts. No direct or indirect harm caused by an AI system is described, nor a plausible future harm from the event itself, so it does not qualify as an AI Incident or AI Hazard. It is instead Complementary Information about societal and governance responses to AI-related misinformation.

Online rumour claims "a certain area has become a norovirus hotspot"; officials issue a rebuttal

2025-02-22
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated content used to spread false information, which is a concern related to AI misuse. However, the main focus is on the misinformation and the official responses to it, including investigations and content removal. There is no direct or indirect harm caused by the AI system itself described here, nor is there a plausible future harm from the AI system's use beyond the misinformation context already addressed. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-generated misinformation rather than reporting a new AI Incident or AI Hazard.

Let "AI governance" outrun "AI rumours"

2025-02-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated rumors that have caused real harm, such as economic disruption, damage to corporate reputations, and misinformation affecting public perception. These harms fall under violations of rights and harm to communities. The AI systems involved are generative AI technologies producing false text, images, and videos. Since the harms are occurring and the article discusses legal actions and governance responses, this qualifies as an AI Incident. The article also includes complementary information about governance and mitigation efforts, but the primary focus is on the realized harms caused by AI-generated misinformation.

AI should be a rumour "shredder", not a rumour "factory"

2025-02-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and disseminate false disaster information, which has directly led to social harm by causing public panic and disrupting the information ecosystem. The article explicitly states that AI-generated rumors have been spread and that authorities have taken action against perpetrators. This meets the criteria for an AI Incident because the AI system's use has directly caused harm to communities and disrupted social order. The article also discusses potential governance and technical responses, but the primary focus is on the realized harm from AI misuse, not just potential or complementary information.

When "AI rumours" can be generated with one click, where should governance go? | Nanfang Plus

2025-02-21
static.nfnews.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the generation of misinformation, which has caused real harm to communities and enterprises, thus fitting the definition of AI-related harm. However, the article focuses on the general phenomenon, its challenges, and governance responses rather than reporting a specific incident or hazard event. Therefore, it is best classified as Complementary Information, as it provides context, updates on societal and governance responses, and insights into the evolving AI ecosystem related to misinformation.