AI-Generated Misinformation and Content Manipulation Spark Regulatory Crackdown in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In China, misuse of generative AI technologies has led to widespread dissemination of false, misleading, and low-quality content, harming consumer trust and market fairness. Authorities and industry groups, including the China Advertising Association and platforms like Xiaohongshu, are launching standardization and enforcement actions to combat AI-driven misinformation and restore integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The GEO service explicitly involves AI systems by influencing AI data sources to affect AI recommendations. The use of such services leads to misleading AI outputs, which constitutes false advertising and unfair competition, harming consumers and market fairness. These harms fall under violations of legal obligations and harm to communities. The article reports that these practices are ongoing and have caused controversy, indicating realized harm rather than just potential risk. Therefore, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; Business

Harm types
Reputational; Economic/Property

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


AI Daily: Tencent releases "龙虾管家" (Lobster Butler); Douyin takes action against 14,000 accounts for borderline AI content violations; Youzan responds to "3·15" AI-poisoning rumors

2026-03-18
chinaz.com
Why's our monitor labelling this an incident or hazard?
The article mentions AI systems and their applications, but none of the described events involve direct or indirect harm caused by AI systems, nor do they describe plausible future harm from AI system development or use. The enforcement actions against AI-generated inappropriate content are reported as completed, but no specific incident of harm is detailed beyond the enforcement itself, which is a governance response. The legal compliance challenges and product launches are typical updates without new harm or hazard. Therefore, the article is best classified as Complementary Information, providing context and updates on AI developments and governance without reporting new AI Incidents or Hazards.

Controversy as large models are "poisoned": have GEO services disappeared from search on online trading platforms?

2026-03-18
China News
Why's our monitor labelling this an incident or hazard?
The GEO service explicitly involves AI systems by influencing AI data sources to affect AI recommendations. The use of such services leads to misleading AI outputs, which constitutes false advertising and unfair competition, harming consumers and market fairness. These harms fall under violations of legal obligations and harm to communities. The article reports that these practices are ongoing and have caused controversy, indicating realized harm rather than just potential risk. Therefore, this event meets the criteria for an AI Incident.

Facing data pollution, we need a stronger dose of "information immunity"

2026-03-18
China News
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (large language models) whose training data is deliberately polluted with false information by malicious actors. This manipulation directly leads to AI-generated content that misleads users and promotes counterfeit products, causing harm to communities and violating users' rights to truthful information. The article describes ongoing harm rather than just potential risk, qualifying it as an AI Incident. The AI system's outputs are directly influenced by the polluted data, resulting in harmful consequences. Therefore, this is an AI Incident involving indirect harm caused by malicious data poisoning of AI training sets.

Exposing the GEO grey industry chain: "poisoning" large AI models for as little as 9.9 yuan?

2026-03-18
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (large language models and AI search engines) being manipulated through the use of the GEO software tool, which systematically injects biased and false information into the AI training and retrieval data. This manipulation directly leads to harms including misinformation, economic losses, violation of users' rights to truthful information, and degradation of AI model quality. The article documents realized harms such as misleading AI answers, economic damage to brands, and loss of public trust, fulfilling the criteria for an AI Incident. The involvement is through the use and misuse of AI systems and their data sources, causing direct and indirect harm to individuals, communities, and the AI ecosystem.

[TMT Morning Post] Beijing launches a special campaign targeting five categories of AI-related online disorder; the National Development and Reform Commission unveils a new batch of major foreign-investment projects with USD 13.4 billion in planned investment; Tencent's QClaw set to launch with a fully upgraded WeChat entry point

2026-03-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the context of misuse and harmful content generation, such as AI-generated synthetic pornography, deepfake impersonations, and AI-generated misinformation, which are recognized harms to communities and individuals. However, the article describes a government-led regulatory campaign to address these issues rather than reporting on a specific AI incident where harm has already occurred or a hazard where harm is imminent or plausible. The campaign aims to prevent and mitigate AI-related harms by enforcing laws and improving platform oversight. Therefore, the event is best classified as Complementary Information, as it details societal and governance responses to AI-related harms and risks, enhancing understanding of the AI ecosystem and ongoing mitigation efforts.

The China Advertising Association launches standardization work for generative engine optimization (GEO) to jointly build a new development ecosystem

2026-03-18
Guangming Online (gmw.cn)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically generative AI content creation and optimization technologies. The harms described include misinformation dissemination, consumer deception, and damage to brand reputation and market fairness, which fall under harm to communities and violations of rights. These harms are already occurring as evidenced by the '3·15' evening show exposure and societal concern. The article primarily reports on the launch of a standardization initiative as a response to these harms, aiming to mitigate and prevent further incidents. Since the main focus is on the governance and mitigation response to an existing AI Incident, this article is best classified as Complementary Information rather than a new AI Incident or AI Hazard.

The China Advertising Association fully launches GEO standardization work, focusing on compliance standards across the industry's full chain

2026-03-18
Eastmoney (eastmoney.com)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI technologies used in GEO) that have already caused harms including misinformation, consumer deception, and damage to brand reputation and market fairness. These harms fall under violations of rights and harm to communities. The article reports on the response to these harms through standardization efforts, but the primary focus is on the ongoing or realized harms caused by AI-generated misinformation and the industry's disorder. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harms, and the article discusses measures to address these harms.

Beware of "AI-managed accounts" eroding the internet content ecosystem

2026-03-15
Eastmoney (eastmoney.com)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and manage fake content and interactions on online platforms, which has directly led to harm including misinformation spread, violation of platform rules, erosion of user trust, and disruption of fair content creation ecosystems. These harms correspond to damage to communities and violation of rights related to truthful information and fair competition. Since the harm is occurring and the AI system's misuse is central to the problem, this qualifies as an AI Incident rather than a hazard or complementary information.

Xiaohongshu "buries" AI

2026-03-15
Phoenix New Media (ifeng.com)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to autonomously generate content and simulate human interactions on Xiaohongshu, which is a clear AI system involvement. The misuse of these AI systems has directly led to harms including violation of community trust, harm to the community environment through spread of false or low-quality content, and indirect harm to commercial interests and user safety (e.g., users being misled to buy counterfeit or harmful products). These harms fall under harm to communities and harm to property/consumers. Therefore, this event qualifies as an AI Incident because the AI system's use has directly caused significant harm to the platform's community and commercial ecosystem.

Self-media operator summoned over AI-generated fake government-affairs scenes as online disorder crackdown continues

2026-03-18
China.com (tech channel)
Why's our monitor labelling this an incident or hazard?
An AI system is involved as AI technology was used to generate false images and videos. The use of AI-generated false information constitutes a violation of rights and harms the community by spreading misinformation. However, the article notes that no substantial harm has yet materialized, and the authorities are taking preventive measures. Therefore, this event represents a plausible risk of harm due to AI-generated misinformation but does not describe realized harm. Hence, it qualifies as an AI Hazard rather than an AI Incident.

"Standard answers to the 2026 Gansu provincial civil-service exam" debunked; AI rumor-monger severely punished

2026-03-18
China.com (tech channel)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate false content that was widely disseminated and caused social harm by disrupting the normal information environment. The harm is realized (not just potential), as the false video attracted significant attention and negatively impacted social order. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident under the definitions provided. The law enforcement response and deletion of the content are complementary actions but do not change the classification of the original event as an incident.

Beijing launches a special campaign targeting five categories of AI-related online disorder

2026-03-18
Qianlong.com
Why's our monitor labelling this an incident or hazard?
The article details a regulatory action aimed at addressing and preventing various harms caused by AI-generated content, including violations of rights, misinformation, and harmful content dissemination. However, it does not describe a particular incident where harm has already occurred or a specific hazard event with plausible imminent harm. Instead, it focuses on the response measures, enforcement plans, and platform obligations to mitigate AI misuse. Therefore, it fits the definition of Complementary Information as it provides context and governance response to AI-related harms rather than reporting a new incident or hazard.

According to IDC, AI search in China surpassed 600 million monthly active users in 2025, and more than 60% of enterprise users now turn to AI Q&A platforms first when gathering supplier information before making decisions. Data from the China Academy of Information and Communications Technology show that China's GEO service market exceeded RMB 4.2 billion in 2024, with a compound annual growth rate of 38%. Frost & Sullivan forecasts that the market will top RMB 18 billion in 2026, growing at more than 45% annually. Yet the AI "poisoning" black market exposed on CCTV's "3·15" consumer-rights gala has torn open a grey zone in the commercialization of generative AI. Just as the public was building trust in AI, some GEO (generative engine optimization) providers have been systematically "feeding" AI false information, reducing AI to merchants' "marketing puppets", polluting users' AI and search ecosystems, and misleading users for profit. Systematic, effective governance of the AI "poisoning" grey industry can no longer wait.

Question 1: Why has AI become a prime target for false-information "feeding"? The root cause is AI's dual nature: an authoritative new information gateway built on exploitable technical flaws. On one hand, the public treats large AI models as objective, rational sources of "knowledge consensus" and naturally trusts their output; on the other, AI's "hallucination" defect and dependence on data make it easy to manipulate. Large models generate content probabilistically and will fabricate "plausible" details when training data are missing. The GEO grey industry exploits exactly this loophole, polluting training data with bulk-fed false information until the AI internalizes lies as "standard answers". During the 2025 earthquake in Dingri County, Tibet, for example, an AI-generated image of a "buried little boy" gave itself away with six fingers, but subtler falsehoods can already defeat AI's cross-verification mechanisms: when multiple accounts post identical false content, the AI misreads it as "fact" and recommends it preferentially. Attackers need only forge data or tamper with labels to implant "pollution sources" at the training stage, letting the AI quietly absorb false beliefs so that its later outputs drift away from objective fact. More critically, AI spreads information automatically and at scale: a single "poisoned" AI-generated item can reach a vast audience across platforms in a short time, with an efficiency and reach far beyond traditional human dissemination. Once the grey industry spotted this amplification power, it became an efficient, low-cost tool for manufacturing opinion and misleading the public.

Question 2: Who gives the GEO grey industry its opening? Commercial incentives are the core engine behind the "poisoning" disorder. Traffic is the lifeblood of internet business models, and AI is becoming the new gateway through which traffic is allocated. Once users habitually ask AI questions and act directly on its advice, whoever can influence AI's answers holds the key to monetizing traffic. For merchants, a recommendation link or brand name surfacing in an AI answer means enormous exposure and potential customers. This all-out pursuit of traffic has warped SEO techniques originally meant for compliant marketing. Grey-industry crews found it cheaper and more effective to forge content that deceives AI than to painstakingly improve content quality. A complete black industry chain formed quickly: upstream, merchants with urgent promotion needs; midstream, technical teams writing malicious prompts and mass-producing fake posts; downstream, "water armies" that distribute content and inflate engagement to boost its weighting. These crews use AI to fight AI, running automated scripts that generate tens of thousands of plausible fake reviews and fabricated news reports, drowning genuine information in noise. This "fight magic with magic" approach not only slashes marketing costs but can rapidly capture AI "mindshare" and harvest traffic. In addition, many small and mid-sized merchants actively seek out grey GEO services to cut customer-acquisition costs, treating AI recommendations as a cheap traffic channel; in 2025, grey GEO orders in e-commerce and local services grew 190% year on year, and this strong demand directly fueled the chain's rapid expansion.

Question 3: Why is false information so hard to detect and remove technically? The difficulty of technical screening has given the GEO grey industry room to take root and grow. Current AI algorithms still struggle badly with highly realistic fakes. As generative AI has spread, the bar for producing fake content has fallen to almost nothing; grey-industry operators can use the same large models to generate logically coherent, naturally written pseudo-original content. Machine-generated text is now nearly indistinguishable from genuine human expression in its linguistic features, and sometimes even tighter in its logic. For AI models that judge content quality by statistical probability, it is very hard to tell whether a passage records a real human experience or a machine-made lie. Multimodal technology, moreover, has made "pictures as proof" a thing of the past: forged images and video paired with text can assemble highly deceptive evidence chains that leave the AI thoroughly lost. Algorithms often prove "powerless" against such compound, high-fidelity attacks. Existing filters mostly rely on known violation word lists or obvious logical flaws, while defenses against purpose-built, carefully packaged GEO "poisoning" content remain fragile and lagging. The "black box" nature of algorithms compounds the opacity. Most commercial AI systems disclose neither their training-data composition nor their weighting logic, so outsiders cannot trace a wrong output back to its source. When users notice a local restaurant oddly prominent in AI recommendations, they have no way to tell whether it reflects genuine word of mouth or manufactured volume. This inexplicability weakens public oversight and hands the grey industry a natural "protective umbrella". Even when platforms later detect anomalies, lacking effective tracing tools they can usually only scrub surface content, leaving the manipulation chain behind it intact.

Question 4: Why has AI "poisoning" sat so long in a regulatory blind spot? Given how serious the situation is, it is worth asking why this disorder has long been ignored at the margins. The uncertainty of the technology has objectively masked the problem's severity. As noted above, the "black box" character of large models makes it hard to trace how a wrong answer was produced. When an AI talks nonsense with a straight face, ordinary users often fail to notice, and may even trust it because of its "confident" tone. The industry calls this "hallucination", but the harm from GEO "poisoning" far exceeds technical hallucination: it is deliberate, targeted misdirection. With no effective traceability, victims who realize they were misled often have nowhere to complain and cannot establish whether the cause was a technical fault or human malice. From injection into the content pool, through AI crawling and training, to delivery to users, the transmission chain takes at least 15 days and as long as 3 months, so by the time the harm erupts the original injector is hard to trace. Existing rules focus mainly on the responsibilities of AI service providers; the legal characterization of GEO injection activity and intermediary services, and the standards for penalizing them, remain undefined. Although the Interim Measures for the Management of Generative Artificial Intelligence Services already make providers responsible for training-data quality, regulation of this new grey industry's platform "poisoning" is still a gap. Experts have suggested explicitly bringing "malicious information feeding that manipulates AI systems" within the scope of the Anti-Unfair Competition Law, opening a full accountability chain from GEO providers to the merchants who commission them. Consolidating platform responsibility matters just as much: AI providers should build tiered credibility assessments of data sources and manage retrieval sources through black/white lists or weight promotion and demotion, technically raising the bar for "poisoning"; platforms should monitor anomalous traffic and police non-compliant or malicious account networks.

Question 5: How should the systemic contamination of the AI "poisoning" grey industry be addressed? More worrying still, "poisoning" is evolving from single-point attacks into systemic pollution. Early grey operations focused on one platform or scenario; today they show cross-platform coordination and data reuse. A single forged local review may simultaneously feed the training of search engines, voice assistants, and intelligent customer-service systems, producing a chain pollution effect. Because platforms lack data sharing and joint risk defenses, one loophole can cascade into distortion across the whole information ecosystem. AI systems' hunger for data converges here with GEO's business logic, together building a high-return, low-risk grey hotbed. Technology designers chase efficiency and scale, platform operators chase traffic and conversion, and regulation has yet to catch up with the pace of iteration; stacked together, these factors make false-information feeding not merely feasible but, in some niches, "standard practice". Without timely intervention, this distorted input will keep eroding the credibility of AI systems. Cross-department collaboration is the key to systemic governance. The regulatory framework must break out of fragmented, symptom-by-symptom thinking and move to coordinated governance across the full life cycle. Legislation should establish joint liability among AI developers, data providers, and platform operators, requiring them to bear the burden of proving the legality and representativeness of training data. A dedicated guideline could be issued on the basis of the Cybersecurity Law and the Data Security Law, classifying "data poisoning" as a form of network attack and granting joint enforcement powers to the cyberspace administration, public security, and market regulators. Platform duties should be made concrete: not just removing violating content after the fact, but screening beforehand, with traffic limits or manual review for accounts that repeatedly submit similar geo-tagged content.

Question 6: How can the public tell truth from falsehood and protect information security? Faced with ever more lifelike generated falsehoods, public discernment and media literacy become society's vital "immune cells". Building that "immunity" cannot stop at vague warnings; it needs concrete, actionable method. A core principle is "verify proactively, cross-check". Confronted with a striking AI-generated video or text, the public should build habits: check whether the information comes from an authoritative institution or verifiable media; use reverse image search to test the authenticity of images or video frames; cross-check the key facts (place, people, time) against multiple independent, credible sources; and be wary of content that inflames extreme emotion, claims "exclusive secrets", or demands immediate sharing. The education system should weave digital literacy and critical thinking throughout, teaching from basic education onward how to assess the credibility of online information and to understand the characteristic traits and limits of AI-generated content. Communities, libraries, and media platforms can cooperate to popularize practical techniques for spotting deepfakes and misinformation. A broad social awakening can create powerful "demand-side" resistance to the misinformation market. "The most important thing is to establish the understanding that AI is not an encyclopedia," experts advise: the public should treat AI as an efficient tool for organizing information, not an authoritative arbiter of fact. Ultimately, better technical literacy is the foundation on which the public protects its rights in the AI era. AI service providers, for their part, should improve the transparency and explainability of their systems, disclosing the basis, sources, and weighting behind outputs so that users can judge credibility for themselves and consumer rights are better protected.
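The corroboration loophole described above, where an AI treats a claim as "fact" once enough accounts repeat it, and the black/white-list weighting that experts propose as a mitigation, can be sketched as a toy model. Everything below (the claims, source names, and trust weights) is invented for illustration and is not any real system's scoring logic.

```python
# Toy sketch: naive cross-source corroboration vs. source-trust weighting.
# All sources, weights, and claims are hypothetical.
from collections import defaultdict

def corroboration_score(posts):
    """Naive scorer: a claim's credibility grows with how often it is repeated."""
    counts = defaultdict(int)
    for post in posts:
        counts[post["claim"]] += 1
    total = len(posts)
    return {claim: n / total for claim, n in counts.items()}

def weighted_score(posts, source_trust):
    """Weighted scorer: each post counts by its source's trust weight, so bulk
    posts from throwaway accounts matter less than one vetted report."""
    weights = defaultdict(float)
    total = 0.0
    for post in posts:
        w = source_trust.get(post["source"], 0.01)  # unknown sources: near-zero trust
        weights[post["claim"]] += w
        total += w
    return {claim: w / total for claim, w in weights.items()}

# One genuine review vs. 50 copies of a planted claim from fresh accounts.
posts = [{"claim": "restaurant X is mediocre", "source": "vetted_outlet"}]
posts += [{"claim": "restaurant X is the city's best", "source": f"throwaway_{i}"}
          for i in range(50)]

naive = corroboration_score(posts)
trust = {"vetted_outlet": 1.0}  # hypothetical white-list weight
weighted = weighted_score(posts, trust)

print(max(naive, key=naive.get))      # the planted claim dominates by volume
print(max(weighted, key=weighted.get))  # the vetted source outweighs the flood
```

The sketch shows why the attack is cheap (duplicating a claim is free) and why the proposed defense works only as well as the trust assignments behind it: a poisoned white list simply moves the problem upstream.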

2026-03-18
Stockstar (stockstar.com)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI models used in search and recommendation) whose development and use have directly led to harm by spreading false information that misleads users and damages the information ecosystem, which constitutes harm to communities and violation of rights to accurate information. The article documents realized harms from AI-generated misinformation and the malicious manipulation of AI training data, not just potential risks. Therefore, this qualifies as an AI Incident. The detailed description of the malicious 'poisoning' of AI training data and the resulting misinformation dissemination fits the definition of an AI Incident because the AI system's malfunction or misuse directly leads to significant harm. The article also discusses governance and mitigation but the primary focus is on the harm caused by the AI system's manipulation and its consequences.

Guard against "AI poisoning" muddying the waters

2026-03-18
Dayoo.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are deliberately manipulated through data poisoning (GEO) to produce biased, misleading, and commercially influenced answers. This manipulation leads to direct harm by misleading users into poor purchasing decisions, causing financial loss (harm to property) and undermining market fairness (harm to communities). The AI system's use and data input are central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI "poisoned" by a grey industry, and the whole internet erupts

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and generative AI) whose outputs are directly manipulated by a grey industry using AI-targeted poisoning techniques (GEO). The AI systems' use is central to the harm, as they are exploited to spread false information that users trust, leading to misinformation and deception. This constitutes harm to communities and consumers, fitting the definition of an AI Incident. The article details realized harm rather than just potential risk, and the AI system's role is pivotal in causing this harm. Hence, the classification as AI Incident is appropriate.

AI poisoning and face-swapping: how can legislation catch up with runaway technology?

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating and disseminating false information that misleads consumers, causing economic harm and deception, as well as AI face-swapping technology infringing on personality rights and causing reputational damage. These are direct harms to individuals and communities caused by the use and misuse of AI systems. The discussion of legislative responses and challenges serves as complementary context but does not negate the fact that harms are already occurring. Therefore, the event is best classified as an AI Incident due to realized harms linked to AI system use.

A multi-pronged effort to prevent the abuse of AI technology (People's Livelihood Frontline)

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (AI-generated content, AI-generated fake accounts, AI-generated videos) that have directly led to harms including misinformation, fraud, violation of rights (e.g., impersonation, deception), and harm to communities by disrupting the online environment. The article reports on realized harms caused by AI misuse and the societal and regulatory responses to these harms. Therefore, this qualifies as an AI Incident because the AI systems' misuse has directly caused significant harm. The article also includes descriptions of governance and mitigation efforts, but the primary focus is on the harms caused by AI misuse, not just complementary information.

Targeting online fraud and cyberbullying, Toutiao banned 180,000 fraud-related accounts last year

2026-03-18
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology used for content monitoring and governance, indicating AI system involvement. However, the focus is on the platform's response to existing problems such as fraud, network violence, and low-quality AI-generated content, including the removal of content and banning of accounts. There is no description of a specific AI system causing harm or malfunctioning, nor is there a credible risk of future harm described. The article mainly provides updates on governance practices, expert opinions, and platform actions to mitigate AI-related harms. This fits the definition of Complementary Information, which enhances understanding of AI ecosystem responses without reporting a new incident or hazard.