AI Data Poisoning via GEO Manipulates Recommendations and Misleads Consumers in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In China, marketing firms exploit Generative Engine Optimization (GEO) to poison AI training data, causing large language models to recommend fictitious or low-quality products and services. This manipulation misleads consumers and distorts market information, with a paid industry emerging around influencing AI-generated answers and recommendations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly, as GEO practices manipulate AI large language models and AI-generated search results. The use of AI-generated false or misleading content that appears in AI outputs and influences consumer decisions constitutes harm to communities and consumers, fulfilling the criteria for harm under the AI Incident definition. The article documents realized harms (misleading AI answers, false brand recommendations) caused by AI system manipulation. Although regulatory and self-regulatory responses are underway, the primary focus is on the existing harms caused by AI system misuse and data poisoning. Therefore, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital security
Transparency & explainability

Industries
Digital security
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property
Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders
Content generation


Articles about this incident or hazard

Inside the AI corpus "poisoning" industry chain: "fighting models with models", how can a ten-billion-yuan market develop healthily?

2026-03-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as GEO practices manipulate AI large language models and AI-generated search results. The use of AI-generated false or misleading content that appears in AI outputs and influences consumer decisions constitutes harm to communities and consumers, fulfilling the criteria for harm under the AI Incident definition. The article documents realized harms (misleading AI answers, false brand recommendations) caused by AI system manipulation. Although regulatory and self-regulatory responses are underway, the primary focus is on the existing harms caused by AI system misuse and data poisoning. Therefore, this is an AI Incident rather than a hazard or complementary information.
Industry's first "Generative Engine Optimization (GEO) Industry Self-Regulation Convention" signed in Beijing; 16 organizations join forces on AI information ecosystem governance

2026-03-14
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but rather details a governance initiative to regulate and improve the AI-generated information ecosystem. It outlines industry self-regulation efforts to prevent potential harms such as misinformation, manipulation, and loss of trust in AI outputs. Therefore, it fits the definition of Complementary Information as it provides context on societal and governance responses to AI-related challenges without describing a specific AI Incident or AI Hazard.
Jiusanlu Digital Media: with GEO optimization at its core, unlocking new possibilities for precision customer acquisition in the enterprise AI era

2026-03-14
天极网
Why's our monitor labelling this an incident or hazard?
The article details the use of AI systems for marketing optimization and customer acquisition, which qualifies as AI system involvement. However, it does not describe any direct or indirect harm resulting from these AI systems, nor does it suggest any plausible future harm. The focus is on business innovation and service offerings rather than any incident or hazard. Therefore, the event is best classified as Complementary Information, providing context and updates on AI applications in marketing without reporting an AI Incident or AI Hazard.
Sina AI Hot Topics Hourly Report, March 14, 2026, 14:00: today's real-time AI news roundup

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, particularly large language models used for search and recommendation. The manipulation of AI training data and outputs by 'black hat GEO' marketing firms has directly led to misinformation and consumer deception, which is a harm to communities and consumers. This meets the criteria for an AI Incident. Other parts of the article describe industry developments and technological progress without direct or plausible harm, thus are complementary information. Since the article includes a clear AI Incident (misleading AI-generated medical recommendations), the overall classification is AI Incident.
AI recommendations are "talking nonsense": what is the fix?

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and AI search assistants) whose outputs are directly manipulated by GEO services through mass content generation and data poisoning. This manipulation has led to AI systems providing false or misleading recommendations, which can mislead consumers and cause harm to communities by spreading misinformation and deceptive commercial practices. Therefore, the AI system's use has indirectly led to harm as defined by misleading consumers and polluting AI outputs. The article describes realized harms (misleading AI recommendations) rather than just potential risks, qualifying it as an AI Incident. The discussion of regulatory and industry responses constitutes complementary information but does not negate the presence of an incident. Hence, the primary classification is AI Incident.
Inside the AI corpus poisoning industry chain (Part 2): fighting models with models, how can the ten-billion-yuan market develop healthily?

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and manipulation of AI systems (large language models or generative AI) to produce misleading or false content that directly misleads consumers, constituting harm to communities and individuals. The article provides evidence of actual realized harm (misleading recommendations and fabricated rankings) caused by the AI system's manipulated outputs. This fits the definition of an AI Incident because the AI system's use has directly led to harm through misinformation and consumer deception. The presence of an active paid industry exploiting AI systems for such manipulation confirms the direct involvement and harm.
Inside the AI corpus "poisoning" industry chain (Part 2): "fighting models with models", how can the ten-billion-yuan market develop healthily?

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and generative AI) used in GEO to manipulate AI-generated content and recommendations, causing misinformation and misleading consumers, which constitutes harm to communities and breaches of advertising laws. The article documents realized harms from these manipulations, qualifying it as an AI Incident. The discussion of regulatory responses and industry self-regulation is complementary but does not overshadow the primary harm described. Hence, the classification is AI Incident.
What does GEO do for a business? AI search is here, and those who skip it lose customers

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content centers on AI systems used for content generation, monitoring, and analysis to enhance enterprise visibility in AI search platforms. There is no indication of any injury, rights violation, disruption, or other harm caused or potentially caused by these AI systems. The article mainly provides complementary information about AI applications and their benefits for businesses, without describing any incident or hazard. Therefore, the event is best classified as Complementary Information.
Get recommended by AI for just 100 yuan? Staying vigilant is never a bad thing

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are deliberately manipulated via GEO to produce misleading recommendations, directly causing harm to consumers by influencing their decisions with false or biased information. This constitutes a violation of advertising laws and harms consumer rights, fitting the definition of an AI Incident due to realized harm from AI misuse. The article also discusses regulatory responses, but the primary focus is on the harm caused by the AI system's manipulated outputs, not just complementary information or potential future harm.
A report on AI advertising placement

2026-03-15
爱范儿
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large AI models) and their manipulation to produce biased outputs favoring certain products. This manipulation is a misuse of AI systems that can directly lead to harm to communities by spreading misleading or biased information, which fits the definition of an AI Incident. The harm is realized or ongoing as the service is actively offering to influence AI outputs for commercial gain, which can mislead users and distort market fairness.
[AI] CCTV's 3.15 Gala exposes "poisoning" of large AI models; the business has already become an industry chain

2026-03-16
ET Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models) and their use is manipulated through deliberate data poisoning by commercial actors. This manipulation leads to AI models providing misleading recommendations, including promoting fictitious products, which constitutes harm to communities through misinformation and deceptive commercial practices. Since the harm is occurring due to the AI system's outputs being manipulated, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of trust in information integrity.
China News Service commentary: the hand poisoning AI must be severed

2026-03-15
China News
Why's our monitor labelling this an incident or hazard?
The event describes the deliberate feeding of false data into AI large models, which is a misuse of AI system development and use, causing direct harm by misleading users and enabling consumer fraud. The harm includes violation of trust, misinformation, and market disruption, which fall under harm to communities and consumers. The AI system's role is pivotal as the manipulated AI outputs are used to deceive and defraud. Hence, this is an AI Incident rather than a hazard or complementary information.
When AI search starts "lying", who will block the "information pollution" triggered by GEO? A conversation with Hu Naying of the CAICT AI Institute

2026-03-16
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article clearly identifies that GEO practices involve feeding misleading or excessive content to generative AI models, which directly affects the AI outputs users receive. This manipulation has already caused or is causing harms including misinformation, potential financial losses, and risks to personal safety, as well as broader societal harms like information pollution and erosion of trust in AI systems. These harms fall under violations of rights, harm to communities, and harm to property or individuals. The involvement of AI systems is explicit, and the harms are realized or ongoing, not merely potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The article also discusses governance and mitigation but the primary focus is on the harm caused by the AI system's misuse.
3.15 Gala exposes poisoning of large AI models; poisoning AI has become an industry chain

2026-03-15
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large models) and their use is manipulated through deliberate data poisoning by third-party services. This manipulation causes AI models to produce misleading recommendations, which is a form of harm to communities and consumers. The harm is realized as AI models actively recommend fabricated products, misleading users. Therefore, this qualifies as an AI Incident due to the direct role of AI system manipulation causing harm.
CCTV's 3.15 exposes AI "poisoning": we spoke with several GEO practitioners

2026-03-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models and generative AI assistants) and their manipulation via GEO techniques. However, it does not report any realized harm or incident resulting from this manipulation. The risks mentioned are potential and systemic, such as possible degradation of information quality or uncertain regulatory environments, which align with plausible future concerns but not immediate hazards. The main focus is on explaining the phenomenon, its commercial growth, and challenges, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
A practitioner's account: how does the GEO exposed by 3.15 precisely "fool" AI?

2026-03-16
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models used for search and answer generation) to manipulate outputs through mass feeding of false content (AI poisoning). This manipulation has directly led to harm by causing AI systems to present false, misleading, or biased information to users, which harms communities by spreading misinformation and distorting information access. The article documents a real case where a fictitious product was promoted as a top brand due to such manipulation, confirming realized harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
3.15 exposes GEO "brainwashing": the poisoning of large AI models is growing increasingly serious

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated through systematic poisoning of their training or input data by GEO service providers. This manipulation leads to AI recommending false or fabricated products, which misleads consumers and harms the information environment. The harm is realized and ongoing, as AI-generated recommendations are already promoting fictitious products. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (misinformation and deception) and potentially breaches obligations to protect consumer rights. The article does not merely warn of potential harm but documents actual manipulation and its effects.
AI corpus "poisoning" prompts industry reflection: Tianyu Digital Technology's Wu Bangyi on the GEO chaos and the industry's path forward

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through AI marketing models and generative engines (GEO). The misuse of these AI systems for data poisoning and spreading false information has directly harmed consumer trust, model reliability, and fair commercial practices, which are harms to communities and violations of consumer rights. The article describes realized harms and the need for regulatory and technical responses, fitting the definition of an AI Incident. The company's response and compliance efforts are complementary information but do not negate the incident classification.
3.15 Gala: large AI models "poisoned"? "Brainwashing" AI has become an industry chain

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large models) and their use is manipulated through the GEO service to inject false information into AI training data. This manipulation causes AI models to output misleading and fabricated product recommendations, which harms consumers by spreading misinformation and deceptive advertising. The harm is realized and directly linked to the AI system's outputs influenced by the malicious use of the GEO service. Hence, it meets the criteria for an AI Incident due to violations of consumer trust and harm to communities through misinformation.
Poisoning large AI models has become an industry chain? Insiders reveal the GEO playbook

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose training data and outputs are deliberately manipulated through the creation and dissemination of biased or fabricated content. This manipulation leads to AI models providing misleading recommendations, which can harm users by spreading false information and distorting trust in AI outputs. Since the AI system's use (training and response generation) directly leads to misinformation and potential harm to users and communities, this qualifies as an AI Incident under the framework. The harm is realized (misleading recommendations of non-existent products), and the AI system's role is pivotal in causing this harm.
March 16 investment risk alert: 3.15 Gala exposes poisoning of large AI models; poisoning AI has become an industry chain

2026-03-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI large models being poisoned through a coordinated industry chain that manipulates AI outputs by injecting false or biased data. This is a direct misuse of AI systems that leads to harm by distorting AI-generated information and recommendations, which can mislead consumers and damage trust. The harm is realized as the poisoning is ongoing and has been exposed publicly. Hence, this qualifies as an AI Incident due to direct harm caused by the misuse of AI systems.
Who is "poisoning" AI? 3.15 exposes GEO chaos; service providers quote fees from 3,000 yuan with "results within a week"

2026-03-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large language models) and their use is directly impacted by the deliberate injection of false or biased information via GEO services. This manipulation causes AI to generate misleading or false outputs, which harms users by providing inaccurate information and undermines the credibility of AI systems. The harm to communities through misinformation and deception is clearly articulated and ongoing, meeting the criteria for an AI Incident. The article details realized harm rather than just potential risk, and the AI system's role is pivotal in the dissemination of false information.
Poisoning AI has become an industry chain! 3.15 Gala exposes GEO techniques: even a fictitious product can become AI's standard answer

2026-03-15
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large AI models providing product recommendations) and the deliberate feeding of false data (AI poisoning) to manipulate their outputs. This manipulation has directly led to harm by causing AI to recommend fictitious products, misleading consumers and distorting market information. The harm is realized and ongoing, as the AI systems are actively providing false recommendations based on the poisoned data. Hence, it meets the criteria for an AI Incident due to the direct involvement of AI systems in causing harm through misinformation and commercial deception.
3.15 exposes "poisoning" of large AI models! Implicated firm Lisi Culture Media had only one insured employee last year

2026-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions an AI large model being 'poisoned,' which indicates a malfunction or malicious manipulation of the AI system. The exposure on a consumer rights program focusing on safety and rights violations suggests that the AI system's compromised state has led to harm or risk to consumers, fitting the definition of an AI Incident. The AI system is directly involved, and harm to consumer rights or safety is implied. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
3.15 exposes the black market for "poisoning" large AI models: AI answers tampered with for 39.9 yuan

2026-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) whose outputs have been manipulated by malicious actors using AI-driven content generation and dissemination tools (GEO). The AI systems' recommendations have directly led to harm by promoting false and potentially dangerous products, misleading consumers, and damaging trust. This constitutes a violation of consumer rights and causes harm to communities, fitting the definition of an AI Incident. The article details realized harm rather than just potential risk, and the AI system's role is pivotal in the harm caused.
After the 3.15 Gala exposed AI poisoning: Liqing GEO rushed to delete posts and close accounts, after claiming coverage of 8 major AI models and 12 media platforms

2026-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The described 'GEO' system explicitly involves AI large language models and manipulates their outputs by poisoning their training or input data via coordinated content dissemination. This manipulation causes real harm: consumers receive misleading information that can lead to financial loss and health risks, violating their rights and harming communities. The event details the system's use and its harmful consequences, not just potential risks. Therefore, it qualifies as an AI Incident due to direct harm caused by the AI system's misuse.
The trillion-yuan hidden business called out by 3.15: "polluting" DeepSeek

2026-03-15
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (DeepSeek and other AI dialogue models) whose outputs are deliberately manipulated by feeding them biased or optimized data (GEO) to influence AI-generated answers. This manipulation leads to harm by degrading the quality and trustworthiness of AI information, misleading users, and distorting fair competition, which constitutes harm to communities and potentially violates users' rights to accurate information. The article documents that this practice is ongoing and widespread, with real commercial impact and user influence, thus meeting the criteria for an AI Incident. The AI system's use is central to the harm, as the manipulated data directly affects AI outputs that users rely on.
Unmasking AI poisoning: a rich vein for marketing, a poison for cognition

2026-03-16
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models providing answers), and the manipulation of their training or input data (feeding biased content) directly leads to harm by misleading users and distorting information, which harms communities and consumers. The harm is realized, not just potential, as the article reports ongoing commercial practices that influence AI outputs to favor paying clients, effectively poisoning AI responses. This fits the definition of an AI Incident because the AI system's use and misuse have directly led to harm (misinformation, erosion of trust, unfair commercial advantage).
For 100 yuan, an unlicensed operation can make AI's medical-aesthetics recommendation list: the hidden danger of rampant false information

2026-03-14
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) used for generating medical beauty recommendations. The harm arises from the AI system's outputs being manipulated by paid marketing (generative engine optimization) to promote fictitious medical institutions with fabricated credentials. This misinformation can directly harm users by misleading them into unsafe medical choices, constituting injury or harm to health (a). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through dissemination of false and potentially dangerous information.
Interpreting the GEO chaos: how AI's answers are manipulated, and the truth behind AI poisoning

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models) to produce manipulated outputs that mislead users. The AI system's outputs are directly influenced by malicious input data (AI poisoning), causing the AI to generate false or biased answers. This leads to harm to communities and users by disseminating misinformation, which fits the definition of an AI Incident under harm category (d) - harm to communities. The article reports that this manipulation is actively occurring and demonstrated, not just a potential risk, so it is an AI Incident rather than a hazard or complementary information.
The answer AI gives you may have been fed to it: beware GEO poisoning

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (large language models) that are being deliberately fed false data (GEO poisoning) to manipulate their outputs. This manipulation can directly lead to harm by spreading misinformation to users, which constitutes harm to communities and individuals relying on AI-generated information. Although the article does not describe a specific incident of harm already occurring, it clearly outlines a credible and plausible risk of harm from this AI misuse. Therefore, this situation qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving misinformation and deception.
What exactly is the GEO business? AI search manipulation revealed

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The GEO business uses AI systems to generate and distribute large volumes of fabricated content that manipulates AI search rankings, leading to misinformation and consumer deception. This constitutes direct harm to communities by polluting information ecosystems and misleading users. The AI system's role is pivotal in automating and sustaining this manipulation, fulfilling the criteria for an AI Incident. The event describes actual harm occurring, not just potential risk, and involves AI system use leading to violations of trust and consumer rights.
Interpreting the GEO chaos: how AI's answers are manipulated, and the trust crisis behind fictitious products

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems providing fabricated product recommendations that do not exist, misleading millions of users. The AI system's use directly caused harm by spreading false information, undermining consumer trust and potentially causing economic or reputational damage to brands and users. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities (misinformation and trust crisis). The article describes a realized harm scenario, not just a potential risk, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI systems are central to the event.
Insiders reveal the GEO playbook as an AI "poisoning" industry chain surfaces

2026-03-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large AI models) and their outputs being manipulated through targeted content injection (GEO) to produce misleading recommendations, including for non-existent products. This manipulation leads to harm to communities and consumers by spreading false information and deceptive commercial content, fulfilling the criteria for harm to communities and violation of trust. The AI system's use and development are directly involved, as the AI models ingest and rank the manipulated content, resulting in harmful outputs. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is occurring and the AI system's role is pivotal.
3.15 Gala exposes the "AI poisoning industry chain": the truth about GEO manipulation

2026-03-15
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The GEO business explicitly uses AI large language models and manipulates their training or input data by injecting promotional content to influence AI outputs. This results in AI systems recommending fake or biased products, which is a direct harm to consumers and communities by spreading misinformation and undermining trust in AI. The event describes realized harm caused by the AI system's manipulated outputs, meeting the criteria for an AI Incident involving harm to communities and violation of informational integrity. Hence, it is classified as an AI Incident.
"快的客户一天就上了":央视315曝光"AI投毒"黑产链后,淘宝GEO商家承诺"2-7天上排名"

2026-03-15
finance.china.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models and AI assistants) whose training data and outputs are deliberately poisoned by a commercial black-market chain to produce false and misleading information. This manipulation has directly led to AI systems recommending fake products with fabricated features, misleading consumers and distorting information. The harm is realized and ongoing, including misinformation and deceptive commercial practices, which fall under harm to communities and violations of consumer rights. The article also mentions regulatory responses, but the primary focus is on the existing harm caused by the AI system's manipulated outputs. Hence, this is classified as an AI Incident.
China stunned by AI "poisoning"! GEO floods promotional copy to steer answers, and a fictitious watch even won a large-model recommendation

2026-03-16
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and recommendation engines) whose outputs are manipulated by malicious actors using GEO tools to inject false content. This manipulation has caused the AI to recommend a fictitious product, misleading users and potentially causing financial and health harm. The harm is realized (misleading recommendations and misinformation) and linked directly to the AI system's use and data poisoning. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.
3.15 Gala exposes poisoning of large AI models: recommendation slots in large AI models conceal paid deals

2026-03-15
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large models whose outputs are manipulated by malicious actors using GEO techniques to inject false information into the AI training data and recommendation outputs. This manipulation has caused the AI systems to recommend fake products, misleading users and causing harm to consumers and the broader community. This fits the definition of an AI Incident because the AI system's use and malfunction have directly led to harm (misinformation and consumer deception).
Large AI models poisoned: is every "standard answer" a business deal?

2026-03-16
opinion.dahe.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose training data and outputs have been deliberately manipulated through data poisoning, a form of malicious use affecting the AI's development and use. The harm is realized as users receive biased, misleading answers that serve commercial interests rather than objective truth, which harms consumers and damages the credibility of AI systems. This fits the definition of an AI Incident because the AI system's malfunction (due to poisoned data) directly leads to harm to communities and violation of informational rights. The article describes ongoing harm rather than a potential future risk, so it is not merely a hazard or complementary information.
AI大模型被"投毒"!今夜,3·15晚会刷屏!曝光荐股分成骗局、漂白鸡爪、外泌体......-证券之星

2026-03-15
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (AI large language models) being manipulated via 'GEO' services that inject biased or false promotional content into the AI's data sources, causing the AI to recommend certain products falsely. This manipulation leads to misinformation and deceptive commercial practices that harm consumers and communities. The AI system's use is directly linked to realized harm, meeting the criteria for an AI Incident. The other topics in the article, while serious, do not involve AI systems and thus are not relevant to AI harm classification. The detailed description of the AI manipulation and its effects confirms this is an AI Incident rather than a hazard or complementary information.
3.15 Gala: large AI models "poisoned"? "Brainwashing" AI has become an industry chain

2026-03-15
金羊网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large AI models) whose outputs are manipulated by feeding them biased or false information through coordinated content generation and dissemination. This manipulation leads to AI systems recommending fabricated products, which constitutes harm to communities by spreading misinformation and deceptive commercial practices. Since the AI system's use has directly led to misleading recommendations and potential consumer deception, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of trust in AI outputs.

AI Large Models "Poisoned"? "Brainwashing" AI Has Become an Industry Chain

2026-03-15
金羊网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large language models and their manipulation through deliberate data injection ('poisoning' or 'brainwashing'). The described practice is a use of AI systems that could plausibly lead to harms such as misinformation, unfair commercial influence, and potential violation of rights. However, the article does not report any realized harm or incident resulting from this manipulation, only the existence and promotion of the service. Thus, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no direct or indirect harm has yet been documented.

AI Large Models "Poisoned"? "Brainwashing" AI Has Become an Industry Chain

2026-03-15
大洋网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) whose outputs are manipulated through systematic injection of false information via automated content generation and publication tools (GEO systems). This manipulation leads to AI models providing false, misleading product recommendations to users, which is a direct harm to consumers and communities. The AI system's role is pivotal as it is the medium through which the misinformation is delivered and trusted by users. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's manipulated outputs.

AI Large Model "Poisoning" and Manipulation Abuses Have Become an Industry Chain

2026-03-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use and manipulation of AI large language models by deliberately poisoning their training or input data with false information to distort their outputs. This manipulation directly leads to harm by misleading consumers, which constitutes harm to communities and individuals through misinformation. The AI system's outputs are being controlled to produce false recommendations, which is a direct consequence of the AI system's use and manipulation. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and consumer deception.

Who Is "Poisoning" AI? 3·15 Exposes GEO Abuses; Service Providers Reveal Fees Starting at 3,000 Yuan with "Results in a Week"

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large AI models) whose outputs are manipulated by systematic content injection (GEO) to produce false and misleading information. The harm is realized as users receive and rely on false AI-generated product information, which harms the credibility of AI and misleads consumers, constituting harm to communities and a breach of trust. The manipulation is intentional and ongoing, directly linked to the AI system's use and output generation. Hence, this qualifies as an AI Incident due to the direct and indirect harm caused by AI misuse and manipulation.

Sina AI Hot Topics Hourly Report | March 16, 2026, 03:00 - Today's Real-Time AI Trend Briefing

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically large AI models that generate information. The misuse involves deliberate manipulation of the data these AI systems use, causing them to produce false or biased outputs that mislead users. This manipulation has already occurred and is causing harm by spreading misinformation and distorting AI-generated content, which harms communities and the public's trust in AI. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities through misinformation.

Beware of Large Models Being "Poisoned": AI Must Find Its Way Out of the Junk-Information Maze | 封面评论

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose development and use are directly impacted by malicious data poisoning practices. This manipulation leads to the AI models producing false or misleading outputs, which harms users and communities by spreading misinformation and degrading the quality and trustworthiness of AI-generated information. The harm is realized and ongoing, as the AI models are currently recommending fictitious products as standard answers. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's corrupted outputs.

The New "Virus" of the AI Era: Your Model May Be Getting "Fed Poison"!

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems being deliberately fed poisoned data (GEO poisoning) that causes them to produce harmful misinformation and false recommendations, which can lead to injury or harm to people (health risks from pseudoscience, financial harm from bad advice) and harm to communities through misinformation. The AI system's outputs are directly influenced by this poisoning, constituting a malfunction or misuse of the AI system's knowledge base. The harms are realized or ongoing, not just potential, as the article describes AI recommending fake products and pseudoscience. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

When AI Is Poisoned and Answers Become Ads, How Do We Tell the Difference?

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being manipulated via poisoned training data and algorithmic exploitation to produce biased, paid advertising content disguised as objective answers. This manipulation directly harms users by misleading them, causing potential wrong decisions, and eroding trust in AI, which fits the definition of harm to communities and violation of rights (informational harm). The AI systems' outputs are central to the harm, and the issue is ongoing and systemic, not merely a potential risk. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

3·15 Gala Exposes AI Large Models Being "Poisoned"; "Brainwashing" AI Has Become an Industry Chain

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large models and their training data being deliberately poisoned with false information, which directly leads to harms such as misinformation, consumer deception, and loss of trust in AI outputs. This manipulation affects the AI system's outputs and thus harms communities and users. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use and development have directly led to significant harm to communities and users through misinformation and corrupted AI behavior.

Bombshells Keep Coming at the 3·15 Gala: Poisoned AI, Bleached Chicken Feet; These Traps Are Right Beside You

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems are manipulated through 'poisoning' by feeding them false promotional content, which causes AI search and recommendation systems to present false or misleading information as standard answers. This manipulation directly harms consumers by misleading them, fulfilling the criteria for an AI Incident (harm to people through misinformation). The AI system's use and development are central to this harm. Other reported harms (food safety, medical scams, electric bike safety) do not involve AI and thus are not classified as AI Incidents. The AI-related harm is materialized, not just potential, so it is not an AI Hazard or Complementary Information. Hence, the event is classified as an AI Incident.

微言 | The AI Recommendations You Trust May Be Paid-Advertising Traps!

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI used in generating recommendations and content) being used in a way that could mislead consumers, which is a violation of consumer rights and could lead to harm. Since the article focuses on the potential for harm through misleading AI recommendations and the need for regulation, rather than describing a specific incident where harm has already occurred, it fits the definition of an AI Hazard. The article also includes expert recommendations and regulatory context, but the main focus is on the plausible risk of harm from undisclosed paid AI recommendations rather than a completed incident or a response to one.

The Answers You Get from AI May Be Advertisements

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI large language models whose outputs are manipulated through the coordinated injection of promotional content, a use of the AI system that leads to harm. The harm includes the dissemination of false or biased information, the misleading of consumers, and the erosion of trust in AI-generated content, which constitutes harm to communities and potentially violates consumer rights. Since the AI system's outputs are directly influenced by this 'data poisoning' or 'washing' technique, producing real misinformation and biased recommendations, this qualifies as an AI Incident under the framework.

March 16 Investment Lightning Rod: 3·15 Gala Exposes AI Large Models Being Poisoned; Poisoning AI Has Become an Industry Chain

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI large models being deliberately poisoned through a commercialized data poisoning industry, which is a direct misuse of AI development and use leading to harm such as misinformation and unfair market manipulation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and possibly economic harm. Additionally, the mention of OpenClaw's security vulnerabilities leading to potential data theft and illegal control of financial transactions also constitutes a direct risk and harm to individuals' financial security, further supporting classification as an AI Incident. The harms are ongoing and realized, not merely potential, so this is not an AI Hazard or Complementary Information. The article is not general AI news or unrelated.

3·15 Exposes AI Being "Poisoned": Only an AI That Is a "Picky Eater" Can Live Long | 中听

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose development and use are directly impacted by malicious data poisoning, leading to the AI generating and recommending false information. This misinformation harms users and communities by misleading them, fulfilling the harm criteria (harm to communities). The article reports that this is already happening, not just a potential risk, thus it is an AI Incident rather than a hazard. The discussion about mitigation and AI companies' responsibilities is complementary but secondary to the main event of AI systems being poisoned and causing misinformation.

Who Is "Poisoning" AI? 3·15 Exposes GEO Abuses; Service Providers Reveal Fees Starting at 3,000 Yuan with "Results in a Week"

2026-03-16
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large language models) and their use is manipulated through the development and deployment of large volumes of fabricated content designed to influence AI outputs. The harm is realized as AI systems provide false or biased information, misleading users and damaging the information ecosystem and trust. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (misinformation) and breaches the integrity of information, which can be considered a violation of rights to truthful information. The article documents actual occurrences and demonstrations of this manipulation, not just potential risks, confirming it as an incident rather than a hazard or complementary information.

AI Large Models "Poisoned"? CCTV's 3·15 Gala Reveals That "Brainwashing" AI Has Become an Industry Chain

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large language models and their outputs being manipulated through coordinated content injection ('data poisoning'). This manipulation causes AI systems to recommend fake products, which constitutes harm to communities through misinformation and deceptive commercial practices. The AI system's use is central to the harm, as the manipulated data directly influences AI recommendations. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and manipulation.

Exposed! Mass-Feeding Smear Advertorials to "Brainwash" AI

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large AI models) being manipulated through the injection of false information to influence their outputs. This manipulation is intentional and commercial, resulting in AI systems recommending or prioritizing misleading or false content. Such actions can cause harm to consumers and communities by spreading misinformation and unfairly disadvantaging competitors, which constitutes harm to communities and a violation of fair commercial practices. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and manipulation.

3·15 Gala | AI Large Models "Poisoned"? "Brainwashing" AI Has Become an Industry Chain

2026-03-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI large language models) whose outputs are manipulated through deliberate data poisoning (feeding biased or fabricated content) to influence recommendations. This manipulation has directly led to harm by causing AI models to recommend fictitious products to consumers, misleading them and potentially causing economic or trust harm. The involvement of AI systems in generating and disseminating false or biased information that affects consumers meets the criteria for an AI Incident, as it causes harm to communities and violates principles of truthful information dissemination.

3·15 Gala Exposes AI Large Models Being Poisoned; Poisoning AI Has Become an Industry Chain

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The described practice involves the use and manipulation of AI systems (large AI models) by deliberately feeding them biased or false data to influence their outputs. This manipulation has directly led to AI models recommending false information (e.g., a fictitious smart band), which harms consumers by misleading them and distorts the information ecosystem, thus harming communities. The event clearly involves AI system use and misuse causing realized harm, fitting the definition of an AI Incident.

3·15 Exposes the AI-Recommendation Black Market: Can 100 Yuan Put an Unlicensed "Three-No" Product at the Top of AI Answers?

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (large language models like DeepSeek and others) being manipulated via 'generative engine optimization' (GEO) to promote false and unsafe products as standard answers. This manipulation results in misinformation that misleads consumers, causing harm to health and safety, and harms the information ecosystem's trustworthiness. The AI system's outputs are directly involved in causing these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but an ongoing harm as described.

3·15 Gala Exposes AI Large Models Being Poisoned; 有赞 Holds a 13.05% Stake in the Company Behind the 力擎GEO Software

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the GEO optimization system is used to manipulate AI large models by injecting biased promotional content, causing AI models to recommend false or misleading information. This manipulation directly leads to harm by spreading misinformation and distorting AI outputs, which affects users and communities relying on these AI systems. The AI system's use and its harmful effect are clearly described, meeting the criteria for an AI Incident. The involvement is through the use of the AI system to cause harm, not just a potential risk, and the harm is realized as AI models have already been influenced to recommend fabricated products.

How Exactly Do AI Large Models Deceive Consumers Once Controlled by the "GEO Black-Market Chain"?

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models and AI assistants) whose outputs are manipulated through systematic data poisoning by the GEO black-market chain. This manipulation directly leads to consumer deception and potential harm to health and financial safety, which fits the definition of an AI Incident. The harm is realized and ongoing, as consumers rely on AI recommendations that have been corrupted to promote fake products. Therefore, this is an AI Incident due to the direct harm caused by the AI system's manipulated outputs.

The AI Assistant in a QR Code Lets Product Manuals Answer Users' Questions Proactively -草料二维码

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI question-answering assistant integrated with QR codes) in development and use. There is no indication of any realized harm, malfunction, or violation of rights caused by the AI system. The article mainly provides information about the AI system's capabilities, benefits, and deployment status, which aligns with providing complementary information about AI developments and their ecosystem. Therefore, this event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

Unlicensed "Three-No" Providers on Medical-Aesthetics Rankings: Don't Let AI Become a Hidden "Accomplice"

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI large language models as recommendation systems that have been deliberately fed false and paid content, causing them to recommend unqualified and fictitious medical beauty institutions. This has directly led to consumer misinformation and potential harm to their health and financial well-being. The AI system's role is pivotal as it acts as the medium through which false information is presented as trustworthy advice. The harm includes violation of consumer rights to accurate information and potential physical harm in a health-related domain. Hence, it meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.

GEO Data-Poisoning Black-Market Chain Exposed; the Defense of AI Large Models Brooks No Delay

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose training data and inference outputs are being intentionally poisoned by malicious actors, causing harm to consumers through misinformation and manipulation. This constitutes a direct harm to communities and users relying on AI-generated information, fitting the definition of an AI Incident. The article also mentions responses and mitigations, but the primary focus is on the realized harm caused by the data poisoning attack on AI models.

In the Age of AI Search, Skip GEO Optimization and Watch Your Brand Get Drowned Out

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content centers on the evaluation and strategic use of AI-powered GEO tools to optimize brand presence in AI-generated search results. There is no indication of any realized harm, violation of rights, or disruption caused by these AI systems. Nor does it describe any credible risk of future harm stemming from these tools. The article is informational and analytical, providing complementary context about AI tools and their role in marketing strategies rather than reporting an incident or hazard. Therefore, it fits the definition of Complementary Information.

Inside the AI Corpus "Poisoning" Industry Chain (1): For Just 100 Yuan, a Fictitious Health Supplement Gets "Recommended" by Large Models

2026-03-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated by paid 'Generative Engine Optimization' services that inject fabricated content into the AI's training or retrieval data. This leads to AI systems recommending fictitious medical and health product providers as credible, which constitutes misinformation and deceptive advertising. The harm includes violation of consumer rights, potential health risks, and consumer fraud, all of which are harms to communities and individuals. The AI system's role is pivotal as it is the medium through which the false information is disseminated and trusted by users. Hence, this is an AI Incident, not merely a hazard or complementary information, because the harm is occurring and directly linked to the AI system's manipulated outputs.

For Just 100 Yuan, an Unlicensed "Three-No" Brand Can Win AI's "Center-Stage Recommendation"

2026-03-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are deliberately manipulated by paid services to present fabricated and misleading information as factual recommendations. This manipulation directly leads to harm by misleading consumers, violating their rights to truthful information, and potentially causing health and financial harm. The AI system's role is pivotal as it is the medium through which false information is disseminated and trusted by users. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and the AI system's misuse is central to the event.

AI Large Models "Poisoned": One Vendor Quotes 6,600 Yuan for a Year-Long Package; Industry Insiders Say the GEO Sector Is "a Mixed Bag" and Some Vendors Exploit Loopholes for "Corpus Poisoning"

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) being manipulated through data poisoning techniques (GEO) to produce biased or false outputs that promote certain brands unfairly. This manipulation constitutes a violation of advertising laws and users' rights to accurate information, thus causing harm to communities and violating legal obligations. The harm is realized as the AI outputs are influenced to present misleading information, which users receive unknowingly. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's manipulated outputs.

3·15 Gala | AI "Poisoning" Industry Chain Exposed: False Information and Mass Article Placement Can Manipulate AI Large Models

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large language models being manipulated through coordinated injection of false and promotional content (data poisoning) to influence AI outputs. This manipulation is a misuse of AI systems that directly harms consumers by spreading false information and misleading recommendations, constituting harm to communities and consumer rights. The event describes realized harm through the AI system's outputs being controlled to mislead users, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Pay to Make the Recommendation List: AI Large Model "Poisoning" Industry Chain Exposed

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large models and their training and recommendation processes being manipulated by a coordinated industry chain that generates and disseminates false information. This manipulation causes AI systems to output false, misleading, and biased recommendations, which constitutes harm to communities and breaches of legal obligations related to advertising and intellectual property rights. The harm is realized and ongoing, not merely potential, as false information is already being presented as standard answers by AI. Therefore, this qualifies as an AI Incident due to direct and indirect harm caused by the AI system's use and malfunction (data poisoning).

3·15 Gala Exposé: "Brainwashing" AI Has Become an Industry Chain; the Affected Sectors and Stocks at a Glance

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (GEO technology for content generation and AI recommendation manipulation). It details harms such as misinformation dissemination, trust loss, and regulatory challenges linked to black-market and insecure AI applications. These harms affect communities and market operations, fitting the definition of AI Incident. The discussion of compliant providers and regulatory trends supports the assessment but does not override the presence of realized harms. Hence, the event is classified as an AI Incident.

CCTV 3·15 Gala: Exposing the Poisoning of AI Large Models - 觀點網

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and manipulation of AI large language models by feeding them biased or fabricated content to influence their recommendations and outputs. This manipulation directly leads to harm by spreading false information and misleading consumers, which harms communities and violates trust in AI systems. The AI system's outputs are deliberately altered through malicious data input, constituting an AI Incident as the harm is realized and ongoing through the AI's recommendations.

Sina AI Hot Topics Hourly Report | March 16, 2026, 00:00 - Today's Real-Time AI Trend Briefing

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article includes multiple AI-related topics, but none describe a direct or indirect AI Incident where harm has occurred due to AI system development, use, or malfunction. The mention of GEO technology potentially manipulating AI outputs suggests a risk but does not confirm actual harm, so it does not meet the threshold for an AI Incident or AI Hazard. The exposure of scams and illegal products is a regulatory and societal response, fitting the definition of Complementary Information as it provides updates and context rather than reporting a new AI Incident or Hazard. Therefore, the article is best classified as Complementary Information.

AI Poisoning Industry Chain Exposed: 力擎GEO Can Manipulate Eight Large Models; China's GEO Market Reached 2.9 Billion Yuan Last Year

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as manipulating multiple AI large language models by generating and publishing false promotional content to influence AI outputs and user perceptions. This use of AI directly leads to harm by spreading misinformation, polluting AI data, and potentially infringing on consumer rights and privacy. The harm is realized, not just potential, as the system actively generated and disseminated misleading content that was then recommended by AI models. The exposure and subsequent removal of the system's content confirm the incident's materialization. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Large Models Can Actually Be Controlled by "Poisoning"? The 3·15 Gala Uncovers the "GEO" Gray-and-Black Industry Chain

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large AI language models providing search results and recommendations). The use of the GEO service to generate and publish false information is a misuse of AI development and deployment, directly causing the AI models to output false and misleading information. This results in harm to communities through misinformation and deception, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the AI models are actively recommending non-existent products based on fabricated data. Therefore, this is an AI Incident rather than a hazard or complementary information.

Consumers Footing the Bill for False AI Search Results? Merchants Mass-Feed Untrue Content for as Little as 1,200 Yuan a Month

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it concerns AI large language models and generative AI platforms that provide search and recommendation answers to users. The harm arises from the use and misuse of AI systems: commercial actors deliberately 'poison' the data sources that AI models rely on by injecting false content, which the AI then uses to generate misleading recommendations. This has directly led to harm to consumers (misinformation causing potential financial or health harm) and breaches of legal rights (false advertising and unfair competition). The article also discusses the legal risks and regulatory responses, confirming the recognized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

GEO Data-Poisoning Black-Market Chain Exposed; the Defense of AI Large Models Brooks No Delay

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose data sources are deliberately poisoned with false information (GEO), causing the AI to produce misleading outputs that harm consumers and brands, and undermine trust in AI platforms. The harm is realized and ongoing, as evidenced by the description of consumer misinformation and market distortion. The article also details responses to this harm, but the primary focus is on the harm caused by the AI system's manipulated data and outputs. This fits the definition of an AI Incident, as the AI system's use and data poisoning have directly led to harm to communities and economic actors.

AI Large Models "Poisoned": CCTV's 3·15 Gala Exposes the Full Process

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose training data and recommendation outputs are deliberately manipulated by malicious actors using the GEO technology. This manipulation leads to the AI models producing false and misleading content as standard answers, which constitutes harm to communities by spreading misinformation and undermining the reliability of AI outputs. The harm is realized and ongoing, as the false information is actively influencing AI recommendations. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the harm caused by misinformation and manipulation.

After 3·15 Exposed the AI "Poisoning" Business, Promotional Posts for "力擎GEO" Can No Longer Be Viewed

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (large AI models) whose outputs are being manipulated through generated content to influence recommendations and answers. This manipulation can lead to harm to communities by spreading biased or misleading information, distorting consumer choices, and undermining trust in AI systems. Since the manipulation is actively occurring and has been exposed, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to violations of informational integrity and harm to communities through AI misuse.

Stock Tipsters Who Vanish After Losses, "Cure-All" Drugs That Are Unlicensed Products, AI Large Models Poisoned..... The 3·15 Gala Exposed All of These

2026-03-15
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI large models and the manipulation of their outputs through GEO services that inject promotional and potentially false information into AI training or input data. This manipulation directly harms consumers by misleading them via AI-generated recommendations or search results, which fits the definition of an AI Incident due to harm to communities through misinformation. The other reported issues do not involve AI systems or AI-related harm and are thus classified as unrelated. The AI-related harm is realized and ongoing, not merely potential, so it is an AI Incident rather than an AI Hazard or Complementary Information.

The 3·15 Gala Exposes AI Large Models Being "Poisoned"; "Brainwashing" AI Has Become an Industry Chain

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) being intentionally manipulated through data poisoning to produce biased and misleading outputs favoring paying clients' products. This manipulation leads to harm by spreading false or misleading information to consumers, which can be considered harm to communities and consumers. The AI system's use is central to the harm, as the poisoned data directly affects AI recommendations. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's manipulated outputs.

3·15 Gala Exposé: GEO Technology "Poisons" AI Large Models

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are directly influenced by manipulated input data created through 'GEO optimization' services. This manipulation causes the AI to produce biased or misleading recommendations, which constitutes harm to communities by spreading misinformation and undermining the reliability of AI outputs. Since the AI system's use has directly led to this harm, this qualifies as an AI Incident.

Now an industry chain: 3·15 Gala exposes "poisoning" of large AI models

2026-03-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large models and describes how a service intentionally feeds biased promotional content to these models to manipulate their outputs. This manipulation results in AI systems recommending fictitious products, which is a form of misinformation causing harm to consumers and communities. The harm is realized, not just potential, as the AI models are actively recommending false information. This fits the definition of an AI Incident because the AI system's use and misuse have directly led to harm (harm to communities through misinformation and deceptive commercial practices).

CCTV's 3·15 Gala exposes large AI model "poisoning" abuses; some e-commerce shops rush to delist related products

2026-03-15
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large AI models used for recommendations and search) and a software tool designed to manipulate these AI outputs. The manipulation has directly led to the dissemination of fabricated product information, which constitutes harm to communities and consumers by spreading misinformation and potentially causing economic or reputational damage. The rapid removal of these services after exposure indicates recognition of the harm. Therefore, this qualifies as an AI Incident because the AI system's use and misuse have directly led to significant harm through misinformation and manipulation of AI recommendations.

Even AI answers are for sale! CCTV's 3·15 Gala reveals: vendors who "pay up" get results?

2026-03-16
ezone.hk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models and generative AI search engines) and their use is manipulated through deliberate data poisoning to distort outputs. This manipulation directly leads to harm by spreading misinformation and biased content, which affects users' ability to access truthful information, thus harming communities and violating informational rights. The harm is realized and ongoing, not merely potential, making this an AI Incident rather than a hazard or complementary information.

AI "brainwashed" by fabricated content

2026-03-16
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (a large language model) was deliberately fed fabricated content, which it then used to generate outputs that presented false information as factual, misleading users about a non-existent product. This constitutes harm to communities by spreading misinformation. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The event involves realized harm, not just potential harm, as the AI model actively propagated the false information to users.

CCTV's 3·15 Gala exposes large AI model "poisoning" abuses; some e-commerce shops rush to delist related products

2026-03-16
qlwb.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI large models and their manipulation through a software tool designed to influence AI recommendations and search rankings. The manipulation leads to the AI system recommending fabricated content, which is a direct harm to communities and users relying on AI-generated recommendations. The presence of a gray market for such manipulation and the quick removal of products after exposure further confirm the realized harm. Therefore, this is an AI Incident due to the direct harm caused by the AI system's misuse and manipulation.

After the 3·15 Gala exposed GEO poisoning, we tested Doubao, Qwen, Yuanbao, and DeepSeek

2026-03-15
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large language models being manipulated through the development and use of GEO techniques to produce false or misleading outputs recommending fictitious products. This manipulation has directly led to the harm of spreading misinformation to users, which harms communities by undermining trust and providing false information. The AI systems' outputs are directly influenced by the malicious use of GEO, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

The hidden trillion-yuan business called out by 3·15: "polluting" DeepSeek

2026-03-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (DeepSeek and AI chatbots) and describes how the use of AI-generated content to manipulate AI search results leads to misleading or biased information being presented to users. This manipulation is a direct use of AI systems to cause harm by polluting AI outputs, which affects users' access to truthful information and can cause economic and social harm. The article documents that this is an ongoing and realized phenomenon, not just a potential risk, thus meeting the criteria for an AI Incident rather than a hazard or complementary information. The harm is clearly articulated as misinformation and economic distortion caused by AI system manipulation.

3·15 exposes large-model "poisoning": the "Liqing (力擎) GEO Optimization System" named; its affiliated company has just one employee

2026-03-15
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('力擎GEO优化系统') used to create and disseminate false information that was then incorporated into AI large models, leading to misinformation being recommended by these models. This is a clear case where the use of an AI system has directly led to harm in the form of misinformation dissemination, which harms communities and the information ecosystem. Therefore, it qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through poisoning AI models with false data.

3·15 Gala crackdown | Large AI models "poisoned": paid brainwashing becomes a gray industry chain

2026-03-16
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large AI models) whose outputs are manipulated by poisoning their external data sources with false information. This manipulation leads to AI-generated recommendations that mislead consumers, constituting harm to communities and consumers (a form of harm to people and consumer rights). The harm is realized as consumers receive false product recommendations, which is a direct consequence of the AI system's use of poisoned data. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's manipulated outputs.

Ads become AI-cited content: Chinese state media slams GEO software for feeding fake data

2026-03-16
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) whose outputs are manipulated by feeding them false data via GEO technology. This manipulation has directly led to harm by misleading consumers with false product recommendations, which is a violation of laws protecting consumer rights and advertising standards. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities (misinformation and deceptive advertising) and violations of legal rights.

As early as September 2025, The Beijing News' Shell Finance (贝壳财经) investigated GEO's "playbook", making it one of the first domestic outlets to report on GEO. GEO services fall into two main categories, "software sales" and "full-service packages", the latter priced at 3,600... [full text]

2026-03-16
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models) being manipulated through GEO services that generate and feed biased promotional content into them. This manipulation leads to the AI systems producing outputs that mislead users, constituting harm to communities and violation of users' rights to truthful information and informed choice. The harm is realized as the AI-generated content is already being used and disseminated, not merely a potential risk. The article also discusses the operational details and pricing of these services, confirming active use and impact. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI companies issue statements firmly distancing themselves from illegal gray- and black-market operations

2026-03-16
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI large language models) being manipulated via GEO technology to generate and propagate false information, which has directly led to harm by causing AI models to recommend fake products and spread misinformation. This harms communities by misleading users and damaging trust in AI applications. The exposure of these practices and the companies' responses are part of the incident context, but the core issue is the realized harm caused by AI misuse. Therefore, this qualifies as an AI Incident due to the direct link between AI system misuse and harm to communities through misinformation and manipulation.

Ads become AI-cited content: Chinese state media slams GEO software for feeding fake data

2026-03-16
udn Money
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI large models being manipulated through false data input (GEO technology) to generate misleading content and product recommendations. This manipulation has directly led to harm by deceiving consumers and violating legal protections, fulfilling the criteria for an AI Incident. The AI system's outputs are pivotal in causing the harm, as they propagate fabricated information that influences consumer behavior and market fairness. The legal violations and consumer deception confirm the harm is realized, not just potential, distinguishing this from a hazard or complementary information.

The answer you get from AI may be "poison" someone fed it! Who bears legal responsibility?

2026-03-16
Yangtse Evening Post
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs are manipulated by deliberately feeding them false data, resulting in AI-generated recommendations that mislead consumers and distort market competition. This constitutes indirect harm to consumers (health and financial harm), harm to market order (harm to communities and economic harm), and potential broader societal risks. The article also addresses the legal implications and responsibilities arising from this misuse. Therefore, this qualifies as an AI Incident because the AI system's use and manipulation have directly and indirectly led to significant harms as defined in the framework.

An in-depth 2026 analysis of GEO service-provider selection and case studies: a practical guide from traffic visibility to sustained conversion

2026-03-16
Yesky
Why's our monitor labelling this an incident or hazard?
The article describes AI systems (GEO platforms) used in marketing and brand optimization, which qualifies as AI system involvement. However, it does not report any direct or indirect harm resulting from these AI systems, nor does it suggest plausible future harm or risks. The content is a market and technology overview, including company capabilities and client testimonials, without any indication of incidents, hazards, or governance responses. Therefore, it fits the category of Complementary Information as it provides context and understanding of AI ecosystem developments without reporting new harm or risk.

3·15 exposes "poisoning" of large models! Can we still trust AI?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (large generative models) whose training data and output have been deliberately manipulated by malicious actors through systematic data poisoning. This manipulation has directly led to harm by misleading the public, corrupting AI outputs, and undermining trust in AI as a reliable information source, which affects societal decision-making and public information integrity. The article describes realized harm rather than potential harm, and the AI system's role is pivotal in the dissemination of false or biased information. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Large AI models get "poisoned": are the "standard answers" all just business?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated by deliberate data poisoning, a misuse of the AI system's development and training data. This manipulation directly leads to harm by misleading consumers with biased, paid content disguised as objective answers, undermining trust and causing informational harm to communities. The article details realized harm rather than just potential risk, fulfilling the criteria for an AI Incident. It is not merely a hazard or complementary information because the harm is ongoing and systemic, nor is it unrelated as the AI system is central to the issue.

Zero tolerance for GEO "poisoning"

2026-03-17
China Economic Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models and GEO technology) being used maliciously to create and spread false information, which directly harms consumers and the digital environment. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article describes realized harm from the AI system's misuse rather than potential harm, so it is not merely a hazard or complementary information. Therefore, the classification is AI Incident.

3·15 Gala exposes the AI "poisoning" gray industry chain, revealing high-risk algorithmic vulnerabilities behind large models

2026-03-16
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) whose outputs are manipulated by malicious actors feeding false information (AI 'poisoning') through automated content generation and publication (GEO). This manipulation leads to AI models providing false, fabricated product information and recommendations, misleading users and harming their right to accurate information and consumer protection. The harm is realized and ongoing, as users receive and rely on false AI-generated content. The event also reveals algorithmic vulnerabilities that facilitate this harm. The involvement of AI in both the use and malfunction (due to algorithmic design flaws) directly leads to harm to communities and consumers. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI poisoning roils the advertising industry! Huang Shengmin, director of the China Advertising Museum: build a content bank and refuse to be held hostage by GEO

2026-03-17
National Business Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (GEO techniques feeding false data into AI training corpora) that directly lead to harm: consumer deception and erosion of trust in AI-generated advertising content, which constitutes harm to communities and violation of rights. The AI system's outputs are manipulated to produce false recommendations, causing real-world consumer harm. Therefore, this qualifies as an AI Incident. The article also discusses responses and solutions, but the primary focus is on the realized harm caused by AI misuse.

AI poisoning becomes a black-market business; traditional search can wage a battle to "rebuild trust"

2026-03-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) that are manipulated through a black-market service to ingest and propagate false information, leading to misinformation and trust erosion among users. The harm is direct and realized, as users are misled by AI-generated content based on poisoned data. This fits the definition of an AI Incident because the AI system's use and malfunction (due to poisoned training or input data) directly lead to harm to communities and consumer rights. The article also discusses the broader ecosystem and responses but the core event is the realized harm from AI manipulation.

Large models don't need the truth: inside the "cognitive intrusion" logic of the GEO industry chain

2026-03-17
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of AI systems (large language models with retrieval-augmented generation) that have been deliberately manipulated by malicious actors injecting false information into their data sources. This manipulation has directly led to the AI providing false product recommendations, misleading consumers and causing harm to their financial interests and trust. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's outputs have directly caused harm through misinformation and deception, impacting users and the broader information ecosystem.
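Several of the rationales above describe the same underlying mechanism: a retrieval-augmented model answers from whatever documents its retriever surfaces, so flooding the open web with fabricated promotional text can redirect its answers toward a non-existent product. A minimal, hypothetical sketch of that failure mode (toy keyword-overlap retrieval and invented product names; real GEO campaigns target commercial crawlers and far stronger ranking pipelines):

```python
# Toy illustration of retrieval poisoning; all product names are invented.

def retrieve(corpus, query):
    """Naive retriever: return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

corpus = [
    "brand A fitness band reviewed well by independent testers",
    "brand B fitness band mixed reviews average battery",
]

query = "which fitness band has the best reviews"
clean_answer = retrieve(corpus, query)  # a real, existing product

# A GEO-style attacker floods the web with fabricated praise for a product
# that does not exist; a crawler later folds that text into the corpus.
poison = "brand X fitness band best reviews top fitness band with the best reviews"
poisoned_answer = retrieve(corpus + [poison], query)  # the fabricated product now wins
```

The poisoned document "wins" simply by packing in the query's vocabulary ("best", "reviews"), which is roughly what GEO content farms do at scale against much more sophisticated retrievers.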

Liangjiang commentary | AI gets "poisoned" too? Beware the marketing hand behind the algorithms

2026-03-16
Hualong (CQNews)
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs are manipulated by malicious actors injecting false data into the training or reference data sources. This manipulation leads to actual harm: consumers are misled into buying products based on false AI-generated recommendations, violating their rights and causing economic harm. The article explicitly states these harms have occurred and discusses the mechanisms and consequences. Therefore, this qualifies as an AI Incident because the AI system's use and the malicious manipulation of its data directly lead to harm to consumers and market fairness.

Cut off the AI "poisoning" industry chain by law

2026-03-16
Legal Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs are manipulated through a process called AI 'poisoning' or '投毒'. This manipulation directly leads to harm by spreading false information, violating consumer rights, and damaging market order, which fits the definition of an AI Incident due to violations of human rights and harm to communities. The article describes realized harm caused by the AI system's outputs being manipulated, not just potential harm. Therefore, this qualifies as an AI Incident.

GEO exposed for "poisoning"; brokerages and funds collectively puzzled: can it still be done, and where is the line?

2026-03-16
Eastmoney
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (GEO) that generates and manipulates content, including the creation and dissemination of false information, which constitutes a violation of legal and consumer rights. The exposure of 'poisoning' GEO practices indicates that harm has occurred or is occurring, such as misleading consumers and unfair competition, which are violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm through false advertising, misinformation, and unfair market practices. The article also discusses legal and regulatory responses, but the primary focus is on the incident of harm caused by the AI system's misuse.

The hidden trillion-yuan business called out by 3·15: "polluting" DeepSeek

2026-03-16
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (DeepSeek and other AI dialogue models) and their outputs being manipulated through GEO techniques, which is a form of AI misuse. The harm is realized as users receive biased or manipulated AI-generated answers, which can mislead consumers and distort information, constituting harm to communities and users. The article details ongoing practices and market growth of this manipulation, indicating the harm is occurring, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI poisoning becomes a black-market business; traditional search can wage a battle to "rebuild trust"

2026-03-17
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) being deliberately fed false information through a paid service to manipulate AI outputs, causing the AI to generate misleading or false answers. This directly harms users by spreading misinformation and undermining trust, which fits the definition of harm to communities and violation of rights. The article details how this black market activity is ongoing and has caused real harm, not just a potential risk. Hence, it is an AI Incident rather than a hazard or complementary information.

Multiple companies respond to the GEO abuses

2026-03-16
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI large models and GEO optimization AI systems being manipulated ("AI data poisoning"), which is an AI system involvement. The alleged misuse (data poisoning, false information generation) could cause harm to communities and market order, fitting the definition of an AI Incident if it were occurring. However, the article mainly reports on companies' official statements denying involvement and condemning such practices, without presenting new evidence of realized harm or a new incident. Therefore, the article serves as Complementary Information, updating on societal and industry responses to a previously reported AI Incident rather than reporting a new incident or hazard itself.

3·15 has barely ended, yet GEO services are still hawked on e-commerce platforms! From 298 yuan, you can "manipulate" large-model search results?

2026-03-16
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose search results and generated answers are deliberately manipulated through GEO services that 'poison' the AI by feeding it biased, paid content. This leads to violations of consumer trust and causes harm to communities by spreading misleading information, which fits the definition of harm to communities and other significant harms. The article explicitly states that consumers are misled into making poor purchasing decisions based on manipulated AI outputs, which is a direct harm caused by the AI system's misuse. Hence, it is an AI Incident rather than a hazard or complementary information.

The AI "poisoning" industry chain intensifies, with enormous impact

2026-03-16
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (large language models and recommendation engines) whose training data and output are deliberately manipulated by malicious actors to promote false or misleading content. This manipulation has directly led to harm to consumers (misleading health product recommendations) and harm to communities (loss of trust in AI systems and potential market manipulation). Therefore, this qualifies as an AI Incident because the AI system's use and data poisoning have directly caused harm. The article also mentions responses and regulatory efforts, but the primary focus is on the realized harm from AI data poisoning.

3·15 exposes AI poisoning: a business run from Putian all the way to Silicon Valley

2026-03-16
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (mainstream AI large language models and recommendation systems) that are manipulated through the deliberate injection of false promotional content online (AI poisoning). This manipulation causes AI to recommend non-existent or misleading products, directly misleading consumers and causing harm. The harm is realized (not just potential) as users receive false recommendations from AI, which can lead to financial loss or other negative consequences. The article explicitly describes the use and misuse of AI systems leading to this harm, fulfilling the criteria for an AI Incident.

Control AI answers for about a thousand yuan! After 3·15 exposed "AI poisoning", GEO rebrands and carries on

2026-03-16
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large language models using RAG technology) and their outputs being manipulated by malicious actors who generate and publish false content to influence AI-generated answers. This manipulation directly leads to harm by spreading misinformation and undermining user trust, which is a significant harm to communities and the AI ecosystem. The article documents that this practice is ongoing despite exposure and attempts to block it, confirming realized harm rather than just potential risk. The involvement is through malicious use and data poisoning of AI training and retrieval data, fitting the definition of an AI Incident. The article also discusses governance and technical responses, but the primary focus is on the harm caused by the AI system's manipulated outputs.

One chart explains the GEO scam exposed by 3·15: how did that "non-existent wristband" fool AI?

2026-03-16
Hexun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI assistants) that generate recommendations based on internet content. The malicious use of AI-generated or AI-targeted content (GEO) led to the AI recommending a non-existent product, which is misinformation causing harm to users and potentially violating consumer rights. The AI system's outputs directly caused harm by misleading users, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents an actual case of AI systems being deceived and producing harmful outputs.

2026-03-16
Stockstar
Why's our monitor labelling this an incident or hazard?
An AI system (AI assistants) was directly involved in recommending a non-existent product due to being fed manipulated, false information created and distributed via GEO. This caused harm to users by misleading them with false product recommendations, which can be considered harm to communities and consumers. The AI system's use and malfunction (being 'poisoned' by false data) directly led to this harm. Therefore, this event qualifies as an AI Incident because the AI system's outputs caused realized harm through misinformation and deception of users.

After large models are "poisoned", the AI era needs an "anchor" for responsibility

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated by poisoned data inputs, causing them to generate false information that misleads users. This manipulation is deliberate and systematic, resulting in harm to communities and consumers by spreading misinformation and false advertising. The harm is realized and ongoing, not merely potential. The article also highlights the challenges in legal responsibility but confirms the existence of harm caused by AI outputs influenced by malicious data poisoning. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

After 3·15's GEO exposé, can model companies stay on the sidelines of "AI poisoning"?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models and AI recommendation systems) whose outputs have been manipulated by 'AI poisoning' through polluted data sources. This manipulation has directly led to harm by deceiving consumers with false product information and distorted AI recommendations, which harms communities and consumer trust. The article describes ongoing harm rather than just potential risk, and discusses the role of AI model companies in addressing the issue. Hence, it meets the criteria for an AI Incident due to realized harm caused by the use and misuse of AI systems.

Control AI answers for about a thousand yuan! After 3·15 exposed "AI poisoning", GEO rebrands and carries on

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large models like ChatGPT and others) and their use is manipulated through malicious data poisoning (AI '投毒') to generate false, misleading answers. The harm is realized as users receive false information presented as authoritative, which harms communities by spreading misinformation and undermining trust in AI systems. The article details the ongoing sale and use of software and services to perform this manipulation, confirming active harm rather than just potential risk. Hence, it meets the criteria for an AI Incident due to direct harm caused by AI misuse and malfunction (polluted data leading to false outputs).

Invisible "data poisoning": is AI slowly getting "dumber"? | Cover commentary

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) through deliberate data poisoning, which directly leads to harm by corrupting AI outputs and misleading users. This constitutes a violation of trust and potentially harms communities by spreading manipulated information. Since the harm is occurring due to the AI system's misuse, this qualifies as an AI Incident.

AI is being "poisoned"; regulation must keep pace

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (large language models) whose training data is deliberately manipulated through GEO techniques to produce biased or false outputs. This manipulation could plausibly lead to harms such as misinformation, erosion of trust, and harm to communities through digital pollution. Since the article focuses on the potential and ongoing risk of data poisoning rather than a specific realized harm event, it fits the definition of an AI Hazard. The article also discusses governance and mitigation strategies, but the primary focus is on the risk posed by data poisoning, not on a resolved incident or complementary information about responses.

AI poisoning roils the advertising industry! Huang Shengmin, director of the China Advertising Museum: build a content bank and refuse to be held hostage by GEO

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI models used for recommendations) whose training data is intentionally polluted with false information (AI poisoning) by malicious actors using GEO techniques. This leads to AI-generated advertising content that misleads consumers, causing harm to their decision-making and trust, which fits the definition of harm to communities and violation of rights. The article describes realized harm (consumer deception and industry trust erosion) directly linked to AI misuse, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Ma Yingbing | Consumers are ceding their power of choice to AI; positioning theory faces a life-or-death test

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily provides a conceptual and analytical discussion about AI's impact on marketing and consumer behavior, the risks of misinformation through AI-generated content, and the need for ethical regulation. It does not report a concrete incident of harm caused by AI nor a specific plausible future harm event. Therefore, it fits the category of Complementary Information as it offers context and insight into AI-related developments and societal implications without describing a new AI Incident or AI Hazard.

Stamping out "poisoning" aimed at AI is imperative

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems (large language models) through deliberate data poisoning to manipulate outputs. Although the article does not describe a concrete realized harm, it clearly outlines plausible future harms that could arise from this practice, such as misinformation causing health risks and social instability. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident if unaddressed. The article also emphasizes the need for governance and technical responses, but the main focus is on the risk and potential harm rather than a response to a past incident.

Don't just gawk at the 3·15 drama! Big tech launches a "data defense war": three trump cards against AI poisoning

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models) and describes how malicious use of AI-generated content (GEO technology) has directly led to harm by misleading users with false information, fulfilling the criteria for an AI Incident. The article details a concrete example where AI models recommended a fictitious product due to poisoned data, demonstrating realized harm to users and the AI ecosystem. Although it also discusses mitigation efforts and industry responses, the primary narrative centers on the occurrence of harm caused by AI poisoning, not just potential or future risks or responses. Therefore, the classification is AI Incident.

Miaozhen Marketing Science Academy's Tan Beiping: GEO Is Not 'Poisoning' but a Project to Build Brand Trust in the AI Era

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident or AI Hazard but rather provides a detailed commentary and response to concerns about AI data poisoning and GEO. It explains the dual-use nature of the technology, ongoing governance efforts, and technical safeguards, positioning the discussion as complementary information that enhances understanding of AI ecosystem challenges and responses. There is no direct or indirect harm reported, nor a plausible imminent risk of harm from the described activities. Therefore, it fits the definition of Complementary Information rather than an Incident or Hazard.

When AI Search Starts 'Lying', Who Will Block the 'Information Pollution' Triggered by GEO? A Conversation with Hu Naying of the CAICT AI Institute

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (generative AI models) through GEO practices that directly lead to harms including misinformation, financial loss, threats to safety, and degradation of AI knowledge integrity. These harms fall under violations of rights, harm to communities, and harm to property or individuals. Since the article describes ongoing harms caused by GEO's manipulation of AI outputs and the resulting information pollution, it qualifies as an AI Incident. The discussion of governance and mitigation efforts is complementary but does not overshadow the primary focus on realized harms from AI misuse.

Quick Comment | Rebuilding the Logic of Communication in the AI Era: 'Authority' Should Become a Core Competitive Strength

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs have been deliberately manipulated through injected false data (GEO) to produce misleading recommendations. This manipulation has directly led to harm by promoting unreliable and potentially harmful products, which can mislead consumers and damage public trust. Therefore, this constitutes an AI Incident as the AI system's use has directly led to harm to communities and users through misinformation and deceptive recommendations. The article also discusses regulatory responses and the evolving AI ecosystem, but the primary focus is on the realized harm caused by AI misuse.

What Is GEO? How to Do GEO for Brands Such as 问优 AI?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content is educational and strategic, focusing on how to optimize content for AI-driven answer engines. It does not report any event where AI caused harm or poses a plausible risk of harm. There is no mention of AI system malfunction, misuse, or any direct or indirect harm resulting from AI. The article is about adapting to AI technologies and improving content visibility within AI-generated answers, which fits the definition of Complementary Information as it provides contextual and strategic information about AI systems and their ecosystem without describing a new AI Incident or AI Hazard.

315 Exposes GEO Chaos: Who Is 'Poisoning' AI?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems that generate recommendations based on internet content. The deliberate creation of false information to manipulate AI outputs has directly led to harm by misleading users and distorting AI-generated recommendations. This constitutes a violation of trust and causes harm to communities through misinformation. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and manipulation.

AI Large Models Are Being 'Poisoned': Where Is the Antidote?

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs are manipulated through data poisoning to produce biased and misleading recommendations. This manipulation directly leads to harm by deceiving consumers and undermining market fairness, which falls under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.

AI Poisoning Becomes a Black-Market Industry: Traditional Search Can Fight a 'Trust Rebuilding' Battle

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) that are manipulated through a coordinated black market service (GEO) to spread false information. This manipulation directly leads to harm by misleading users, damaging consumer rights, and corrupting the AI-generated content ecosystem. The article explicitly states that this results in consumer harm and a trust crisis, which fits the definition of an AI Incident involving violations of rights and harm to communities. Although the AI systems have defenses, the black market's successful partial manipulation constitutes realized harm, not just a potential risk. Hence, the classification as AI Incident is appropriate.

Cementing the 'Trustworthy' Foundations of AI: Rebuilding the Information Order After the 3·15 Exposure of 'Data Poisoning'

2026-03-16
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically large language models, which are manipulated through 'data poisoning' to produce harmful misinformation. This manipulation leads to harm to communities by undermining information integrity and public trust, which qualifies as harm to communities under the AI Incident definition. The article describes realized harm through the direct impact of manipulated AI outputs on information order and public trust, not just potential harm. Therefore, this qualifies as an AI Incident due to the direct role of AI systems in causing significant harm through manipulated outputs and the systemic risks posed by such data poisoning.

Sharp Comment | Facing 'AI Pollution', We Must Keep the Initiative in Thinking and Judgment

2026-03-17
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems (large language models) are being manipulated through targeted injection of false data (GEO) to produce misleading and harmful content, which constitutes harm to communities and users. This manipulation is a misuse of AI systems leading to realized harm, not just a potential risk. Therefore, it qualifies as an AI Incident because the AI system's use and development have directly led to harm through misinformation and deception. The article also discusses responses and governance but the primary focus is on the harm caused by AI pollution.

315曝光的"AI投毒"原理:GEO这样操控大模型推荐_手机网易网

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models generating recommendations) and describes how their outputs are manipulated by poisoning training data and retrieval contexts with false information. This manipulation has directly caused harm by misleading users with false product recommendations and distorting market information, which constitutes harm to communities and violation of trust. The article details realized harm, not just potential risk, as AI models have already recommended non-existent products based on poisoned data. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

The GEO Exposed by 315 Should Not Become a 'Hunting Ground' for Traffic

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI large models are being manipulated by malicious companies through systematic injection of false information ('AI poisoning') to distort AI-generated search results and product recommendations. This manipulation results in AI recommending non-existent or falsely described products, misleading users and causing harm to consumers. The AI system's use and development are directly linked to this harm, fulfilling the criteria for an AI Incident. The harm includes misinformation and deception affecting users and consumers, which is a form of harm to communities and individuals. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Control AI Answers for Just a Thousand Yuan! After the 3·15 Exposure of 'AI Poisoning', GEO Rebrands and Business Continues as Usual

2026-03-16
m.163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: AI-powered software generates fake promotional content, which is then captured by AI large language models that use RAG to provide answers. The AI systems' outputs are directly manipulated by the black-market 'GEO optimization' services, leading to the dissemination of false information. This causes harm by misleading users and undermining trust in AI, which fits the definition of harm to communities and violation of rights. The article documents that this harm is ongoing despite exposure and attempts to curb it, confirming realized harm rather than just potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Beware of AI Getting Dumber from Poisoning! Cover Comment: AI Must Escape the Maze of Junk Information

2026-03-16
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large AI models) where malicious actors pay third-party services to manipulate AI outputs to favor their products falsely. This manipulation constitutes a misuse of AI systems that leads to harm in the form of misinformation and degradation of AI quality, which can harm communities and users relying on AI outputs. Since the harm is occurring through the AI system's misuse and is directly linked to the AI's outputs being manipulated, this qualifies as an AI Incident under the framework, specifically harm to communities and the integrity of information.

Public Fund Institutions Compete for AI Favourability; Industry Warns Against Information Pollution

2026-03-18
新华网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI large language models and recommendation engines) used in marketing optimization and content feeding to influence AI recommendations. The concerns raised about "information pollution," algorithmic bias, and potential unfair influence on investors represent plausible future harms related to AI use. Since no actual harm or incident is described as having occurred, but credible risks and calls for governance are emphasized, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems and their impacts are central to the discussion.

AI被"投毒",如何避免上当受骗

2026-03-18
人民网
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (large language models) that are being deliberately 'poisoned' with false information to manipulate their outputs. This manipulation has already led to harm by misleading users and distorting information, which can be considered harm to communities and a violation of rights to accurate information. The article does not describe a hypothetical risk but reports on an ongoing issue revealed by a major media investigation, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's manipulated outputs.

3·15 Gala Calls Out AI Poisoning; China Advertising Association Has Launched GEO Standardization Work

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of generative AI systems (AIGC) that have directly led to harm in the form of misinformation and misleading content affecting consumers and the broader information environment, which constitutes harm to communities and potential violation of rights. The article describes realized harms from AI misuse and the societal response to mitigate these harms through standardization. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly led to harm. The main focus is on the incident of AI-generated misinformation and the response to it, not merely a general update or future risk, so it is not Complementary Information or an AI Hazard.

Who Is Interfering with Fund-Selection Results? Fund AI Marketing Operates in a 'Grey Zone'

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as it discusses generative AI models used for content optimization to influence AI recommendations in financial product marketing. The use of AI to manipulate recommendation outputs constitutes use of AI systems. Although the article does not report actual realized harm such as investor injury or legal violations, it raises credible concerns about AI algorithm bias, information pollution, and unfair advantage, which could plausibly lead to harm to investors and market fairness. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents involving harm to communities (misleading investors) and violations of rights (fair market practices). It is not an AI Incident because no direct or indirect harm has yet occurred or been documented. It is not Complementary Information because the article focuses on describing the emerging practice and its risks rather than updates or responses to a prior incident. It is not Unrelated because AI systems are central to the described event.

Exposing the GEO Grey Industry Chain: 'Poisoning' an AI Large Model for Just 9.9 Yuan; How False Advertising Becomes the 'Standard Answer'

2026-03-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) through the GEO software tool, which directly leads to harm by spreading misinformation and false advertising that users may accept as factual. This causes economic harm, violates users' rights to truthful information, and undermines trust in AI systems, fulfilling the criteria for an AI Incident. The article documents realized harms, not just potential risks, and details how the AI's outputs are manipulated to produce misleading answers, which is a direct consequence of the AI system's use and data poisoning. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Hidden Pitfalls in Emerging Consumer Sectors Deserve Vigilance; Beware of AI Recommendation Traps

2026-03-18
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI large language models) being manipulated through systematic feeding of false information to produce misleading recommendations, which has directly led to consumer harm through false advertising and potential financial loss. This constitutes a violation of consumer rights and harms communities by spreading misinformation and deceptive practices. Therefore, this qualifies as an AI Incident due to realized harm caused by AI system misuse and manipulation.

Under the AI Search Boom: Qingdao GEO Optimization Companies in 2026, a Guide to 青岛博采网络 as a Preferred Service Provider

2026-03-18
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The article describes the use and development of AI systems for marketing optimization (GEO) and emphasizes compliance and best practices. However, it does not report any realized harm, nor does it describe a specific event or circumstance where AI use or malfunction has led or could plausibly lead to harm. It is an informational and promotional piece about the AI marketing ecosystem and service providers, thus fitting the category of Complementary Information rather than an Incident or Hazard.

In-Depth Analysis of the 'AI Poisoning' Black and Grey Industry Chain

2026-03-18
xkb.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose outputs have directly led to harm by spreading false information and recommending non-existent products, which constitutes harm to communities and consumers. The AI system's development and use have been compromised by malicious actors using GEO technology to inject false data, causing the AI to produce misleading outputs. This fits the definition of an AI Incident because the harm is realized and directly linked to the AI system's malfunction or misuse. The article also details the investigation and industry responses, but these are secondary to the primary incident of AI poisoning causing misinformation and consumer harm.

2026 GEO Industry Insights: An Analysis of Five Service Providers' Technical Strength in the Battle for AI Traffic

2026-03-17
天极网
Why's our monitor labelling this an incident or hazard?
The content centers on the description of AI systems used in marketing and their industry impact without indicating any realized or potential harm. There is no mention of injury, rights violations, infrastructure disruption, or other harms caused or plausibly caused by AI. The article mainly serves as a sector report and market insight, which fits the definition of Complementary Information by providing context and updates on AI ecosystem developments and governance without reporting an AI Incident or AI Hazard.

In-Depth Analysis of the 'AI Poisoning' Black and Grey Industry Chain

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically large language models and AI assistants that generate recommendations based on internet data. The "AI poisoning" is a deliberate misuse of AI training and input data by injecting false content into the internet ecosystem, which AI systems then absorb and reproduce as factual outputs. This has directly led to harm by misleading consumers with false product information, which can cause financial harm and damage trust in AI technologies. The article documents an ongoing investigation and industry responses, but the harm is already occurring. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's misuse and malfunction in handling poisoned data.

3·15 Exposes AI 'Poisoning'; GEO's Business Playbook Surfaces

2026-03-17
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models like DeepSeek and Doubao) whose outputs are manipulated through systematic feeding of false data (GEO technology). This manipulation directly leads to AI recommending fictitious products, misleading consumers and causing harm to communities and market order. The harm is realized, not just potential, as AI-generated recommendations influence consumer decisions. The article also references violations of laws protecting consumers and market fairness. Hence, the event meets the criteria for an AI Incident because the AI system's use and manipulation have directly led to significant harm.

"3·15"晚会曝光大模型被"投毒",你的AI助手可能收了"好处费",专家咋支招?

2026-03-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated via GEO to recommend fake products, directly causing harm to consumers through misinformation and deceptive advertising. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (misinformation and deceptive commercial practices). The article also mentions expert opinions and regulatory actions, but the primary focus is on the realized harm caused by the AI system's manipulated recommendations, not just potential or complementary information.

Crack Down on AI 'Poisoning' the Way We Crack Down on Illegal Pollution Discharge

2026-03-17
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) whose training data and output are deliberately manipulated through malicious content injection ('AI poisoning'). This manipulation has directly led to harms including consumer deception, economic loss, potential health and safety risks, unfair competition, and damage to societal trust in AI technology. Since the harms are occurring and the AI system's outputs are being manipulated to cause these harms, this qualifies as an AI Incident. The article also discusses governance and regulatory responses, but the primary focus is on the realized harms caused by AI poisoning.

AI"中毒","解药"何在?__南方+_南方plus

2026-03-18
static.nfnews.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) whose outputs are manipulated by malicious actors using automated tools to generate false content that the AI then recommends to users. This manipulation has directly led to harm by misleading consumers, violating their rights to truthful information, and potentially causing economic and health-related harm through false medical and product recommendations. The contamination of AI training data with false information also represents ongoing harm to the AI ecosystem and users relying on it. Therefore, this qualifies as an AI Incident due to realized harm caused by the use and misuse of AI systems.

Dajiang Commentary: AI Is Being 'Poisoned'; Where to Find the Antidote?

2026-03-17
jiangxi.jxnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) being deliberately manipulated through data poisoning to produce biased and false recommendations, which mislead consumers and harm legitimate businesses. This constitutes a violation of consumer rights and damages market trust, fitting the definition of harm to communities and violations of rights. The AI system's use is central to the harm, as the poisoning directly affects AI outputs that consumers rely on. Hence, this is an AI Incident rather than a hazard or complementary information.

Experts raise alarm on AI data poisoning

2026-03-17
China Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (mainstream AI models) whose outputs have been manipulated by feeding them fabricated promotional content via GEO, an AI-related marketing tool. This manipulation has directly led to AI systems recommending a fake product, which constitutes false advertising and violates consumer rights, a breach of legal protections. The harm is realized, not just potential, as AI-generated responses are already affected. The event also discusses regulatory and governance responses but the primary focus is on the incident of data poisoning causing harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI industry players vow compliance after China's annual 315 Gala uncovers AI 'data poisoning' through GEO technology

2026-03-16
Global Times 环球时报英文版
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models and AI search engines) whose training data and outputs are being deliberately manipulated through GEO technology to produce false and misleading AI-generated content. This manipulation has directly led to harm by spreading false information and false advertising, infringing consumer rights, and distorting market competition. Therefore, it meets the criteria for an AI Incident due to realized harm linked to AI system misuse. Although the article includes industry statements and regulatory calls, the primary subject is the harmful AI-related practice and its consequences, not just a response or update, so it is not merely Complementary Information.

CMG's consumer gala warns of AI manipulation in search results

2026-03-17
news.cgtn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI models for search and recommendation) being manipulated to produce misleading or false outputs that affect users' access to truthful information. This manipulation has already occurred, as evidenced by the exposure of service providers placing misleading content prominently and the documented use of AI-generated disinformation in recent elections. The harms include misinformation, disinformation, and political manipulation, which constitute harm to communities and violations of rights to accurate information. Hence, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.

315 Gala Exposes AI Large Model

2026-03-15
Lookonchain
Why's our monitor labelling this an incident or hazard?
The service described involves manipulating AI large models by influencing their training data or outputs, which falls under the development and use of AI systems. Although no direct harm has yet occurred, data poisoning and paid content placement in AI models could plausibly lead to harms such as misinformation, biased or unfair recommendations, and harm to communities or consumers. The event therefore represents a credible risk of future harm stemming from AI system misuse or manipulation, fitting the definition of an AI Hazard.

Faked Data Fools AI: How 'Data Poisoning' Makes a Fake Product Go Viral

2026-03-16
City News Service
Why's our monitor labelling this an incident or hazard?
The described practice involves the use of AI systems for product search rankings and the deliberate manipulation of data to influence AI outputs. This results in the AI system presenting false or misleading information as standard answers, directly causing harm to consumers and communities through misinformation. Therefore, this constitutes an AI Incident due to the realized harm from the AI system's outputs being manipulated to deceive users.

China's Annual CCTV Consumer Rights Show Uncovers AI Ad Tricks That Deceive Customers

2026-03-16
yicaiglobal.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, specifically generative AI models whose outputs are manipulated by GEO service providers. The use of AI-generated answers to mislead consumers and promote false advertisements directly harms consumers by providing deceptive information, which fits the definition of harm to communities and violation of consumer rights. Since the harm is occurring through the use of AI systems and their outputs, this qualifies as an AI Incident rather than a hazard or complementary information.

AI chatbots recommending a fake product? China flags issue of AI 'data poisoning'

2026-03-18
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI chatbots that generated false product recommendations because of manipulated data inputs ('data poisoning'). This directly harmed consumers by misleading them with fake product information, in violation of consumer rights and advertising laws. The harm is realized and ongoing, as users were served false information and advertisements without their awareness. The article also describes the broader ecosystem of generative engine optimisation (GEO) used to manipulate AI outputs for commercial gain, further supporting the classification. Given the AI systems involved, their use leading to misinformation and deceptive advertising, and the resulting consumer harm, the event qualifies as an AI Incident.

China Probe: How a Fake Fitness Tracker Became an AI 'Top Pick'

2026-03-17
TechRepublic
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose outputs were manipulated by AI-generated fake content (GEO) to recommend a non-existent product, causing harm by misleading consumers and violating their rights. The AI system's use was exploited to produce false recommendations, directly leading to harm. The incident is not merely a potential risk but a realized harm, as the fake product was actively recommended by AI chatbots. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

When 'poisoned' AI chatbots recommend fake products to Chinese consumers

2026-03-19
The Star
Why's our monitor labelling this an incident or hazard?
The event involves AI chatbots explicitly mentioned as recommending fake products due to data poisoning, which is a manipulation of the AI system's training or input data. The harm is realized as consumers are misled by false product recommendations, constituting harm to communities and violation of consumer rights. The AI system's outputs directly led to this harm, fulfilling the criteria for an AI Incident. The article also discusses regulatory and governance responses, but the primary focus is on the incident of harm caused by AI misuse and malfunction.

When AI Answers Are 'Poisoned' by GEO, Who Will Guard the Trustworthy Boundary of AI's Best Answers?

2026-03-20
news.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models with retrieval-augmented generation) whose outputs have been deliberately manipulated by external actors injecting biased and false content into their data sources. This manipulation has directly led to harms such as consumer deception, economic damage, and erosion of trust, fulfilling the criteria for an AI Incident. The article provides concrete examples of realized harm (e.g., consumers buying defective products based on AI recommendations) and discusses the systemic impact on market fairness and user rights. Therefore, this is not merely a potential risk or complementary information but a documented AI Incident involving harm caused by AI system outputs influenced by malicious data poisoning.

Analysis: The 'Poisoning' of Chinese AI Reflects a New Form of Information Warfare

2026-03-19
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) that were fed fabricated data ('AI poisoning') which they then used to generate and recommend false product information to consumers. This misuse of AI directly led to harm in the form of misinformation, commercial fraud, and manipulation of consumer behavior, which harms communities and violates trust. The AI systems' fundamental inability to assess information credibility is a key factor. The event is not merely a potential risk but a realized harm, as the AI systems actively recommended a fake product to users. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Analysis: The 'Poisoning' of Chinese AI Reflects a New Form of Information Warfare (traditional Chinese edition)

2026-03-19
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Chinese large language models like DeepSeek and others) that have been deliberately fed false data ('AI poisoning') leading to the generation and recommendation of fabricated products to users. This manipulation directly results in misinformation harm to communities and consumers, fulfilling the criteria for an AI Incident. The AI systems' malfunction in verifying data authenticity and their use in a coordinated information manipulation campaign demonstrate direct harm caused by AI use. The article also discusses the broader implications for information warfare and regulatory responses, but the core event is the realized harm from AI-generated false recommendations, not just a potential risk or complementary information.

'Stochastic Parrots' and Amplifiers: The Information Pollution Behind the AI Poisoning Exposed by 3·15

2026-03-19
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems (large language models) are being manipulated via data poisoning and GEO techniques to produce misleading and false outputs that mislead consumers and degrade information quality. This misuse directly leads to harm in the form of misinformation, manipulation of consumer decisions, and pollution of the information environment, which fits the definition of an AI Incident. The harms include violations of rights (consumer deception) and harm to communities (information pollution). The article also discusses ongoing responses and mitigation efforts but the primary focus is on the realized harms caused by AI misuse, not just potential future risks or complementary information.

How to Handle the 'Double-Edged Sword' of AI Search

2026-03-20
人民网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of generative AI-powered search platforms and GEO techniques that influence AI-generated search results. It describes realized harms including misinformation ('data pollution'), misleading users through disguised advertising, and unfair competition, all of which impact users' rights and the information ecosystem. These harms are directly linked to the use and misuse of AI systems. The article also discusses governance and mitigation efforts, but the primary focus is on the existing harms caused by AI system use, fitting the definition of an AI Incident rather than a hazard or complementary information.

AI Search Behind the GEO Craze: Technological Innovation Must Guard Against Data Pollution

2026-03-19
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI search engines and generative AI models) and their use in generating and selecting content. It describes the misuse of GEO techniques to manipulate AI outputs, leading to 'data pollution'—the injection of false or misleading information into AI-generated answers. This misuse could plausibly lead to harms such as misinformation, consumer deception, and unfair market competition, which align with harms to communities and violations of rights under the AI Incident definition. However, the article does not report a specific realized harm or incident but rather warns of the potential and ongoing risks, alongside regulatory and platform efforts to address them. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI 'Standard Answers' Urgently Need Rules and Boundaries

2026-03-19
东方财富网
Why's our monitor labelling this an incident or hazard?
The article describes a systemic problem involving AI misuse and its consequences but does not detail a particular event where an AI system directly or indirectly caused harm (AI Incident) or a specific event where harm could plausibly occur (AI Hazard). Instead, it focuses on the challenges, ethical concerns, and necessary responses to AI misuse, which fits the definition of Complementary Information as it provides context, analysis, and governance perspectives without reporting a discrete incident or hazard.

AI 'Poisoning' Can't Be Reduced to Zero; We Must Get Used to Long-Term Coexistence

2026-03-19
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically large language models and AI recommendation algorithms, which are manipulated through mass-produced content to distort information and mislead users. This manipulation has directly led to harm by spreading misinformation and undermining users' ability to make accurate judgments, which constitutes harm to communities and a violation of rights to truthful information. The article provides concrete examples of AI-generated content promoting fake products and discusses the societal impact of such AI-driven misinformation. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

'Poisoning' Large AI Models: 'Anyone Who Guarantees Results Is Lying'

2026-03-19
南方网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly: large language models integrated with search engines that generate answers based on retrieved information. The 'GEO' practice manipulates the input data to these AI systems, causing them to produce biased or false outputs. This manipulation has already resulted in the dissemination of false information (e.g., the fake 'Apollo-9' smart band with fabricated features), which harms communities by misleading users and undermining trust. The article describes realized harm, not just potential risk, and the AI system's role is pivotal in this harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

AI 'Poisoning' Can't Be Reduced to Zero; We Must Get Used to Long-Term Coexistence

2026-03-19
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models and generative AI) and their misuse to produce and amplify misleading content ('AI poisoning'). This misuse has directly led to harm by distorting information ecosystems, misleading consumers, and degrading users' cognitive and decision-making abilities. The article provides concrete examples of harm, such as fake product promotion and addiction to algorithmically recommended content causing health and social issues. The harm is realized and ongoing, not merely potential, fulfilling the criteria for an AI Incident. The article also discusses societal responses and challenges but the primary focus is on the harm caused by AI misuse, not just complementary information or future hazards.

The Second Half of AI Marketing: What Makes GEO Core Brand Infrastructure? The Answers Were in This Salon!

2026-03-19
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on a professional salon event addressing the challenges and opportunities of GEO in AI marketing, especially following a prior AI incident (the 3·15 AI poisoning event). It focuses on clarifying misconceptions, promoting compliance, and discussing future directions for the industry. No new AI Incident or AI Hazard is described; rather, the article provides updates on industry governance, standards development, and collective efforts to mitigate risks and foster healthy AI marketing practices. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI ecosystem responses without reporting a new harm or plausible harm event.

'Brainwashing' AI Has Become an Industry Chain

2026-03-19
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) that are manipulated through deliberate data poisoning by GEO service providers. The systems' outputs directly reflect fabricated and misleading information, which is presented to consumers as factual, causing harm to communities through misinformation and deceptive advertising. The article provides concrete examples of this manipulation and its effects, demonstrating realized harm rather than just potential risk. Hence, it meets the criteria for an AI Incident, as the AI systems' use and malfunction (due to poisoned training or input data) have directly led to harm.

Efficiency Gains on One Side, Data Pollution on the Other: How to Handle the 'Double-Edged Sword' of AI Search?

2026-03-20
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI integrated with search engines) whose use has directly led to harms including misinformation dissemination, violation of consumer rights (lack of advertisement disclosure), and unfair competition practices. These harms fall under violations of rights and harm to communities. Since these harms are occurring and the article describes concrete negative impacts caused by AI misuse, this qualifies as an AI Incident. The article also discusses governance and mitigation efforts, but the primary focus is on the realized harms from AI misuse in GEO.

When AI Is 'Poisoned', How Do We Safeguard the Truth of Information in the Intelligent Era?

2026-03-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models generating answers based on internet data). The misuse of AI through data poisoning and GEO techniques leads to the AI producing false and misleading outputs that have real-world harmful consequences, such as consumer fraud and misinformation. The harm is realized and ongoing, not merely potential. The article details how these manipulations have already caused AI to output fabricated product details and direct users to fraudulent contacts, which fits the definition of an AI Incident due to violations of rights and harm to communities. The article also discusses governance and legal responses, but the primary focus is on the realized harm caused by AI misuse.

AI 'Poisoning' Can't Be Reduced to Zero; We Must Get Used to Long-Term Coexistence - NetEase Mobile

2026-03-19
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems (large language models and recommendation algorithms) are manipulated by generated content to produce misleading outputs that influence user decisions and degrade information quality. This manipulation has already caused harm by spreading misinformation and undermining users' ability to make informed judgments, which constitutes harm to communities and a violation of rights. The involvement of AI systems is clear and central to the issue. Although the harm is non-physical, it is significant and ongoing. The article does not merely warn of potential harm but documents an existing systemic problem. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

China's AI Is Being 'Poisoned'; Behind It Lies a New Form of Information Warfare

2026-03-19
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) that have been deliberately fed false data ('AI poisoning') to generate and promote fake products, which directly misleads consumers and constitutes commercial fraud and misinformation. The AI systems' outputs have caused harm by spreading false information and manipulating consumer behavior, fulfilling the criteria for an AI Incident under harm to communities and violation of rights. The article explicitly states the AI systems recommended a non-existent product based on fabricated data, showing direct involvement of AI use leading to harm. The warnings about information warfare and systemic manipulation further support the classification as an AI Incident rather than a mere hazard or complementary information.

Facing Information Pollution, Do You Still Dare to 'One-Click AI'? | 日新说

2026-03-19
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) whose outputs have been manipulated through targeted input of false information, resulting in misleading recommendations to users. This manipulation constitutes a misuse of the AI system leading to harm to consumers and communities through misinformation and potential economic damage. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm.

A Guide to Avoiding Pitfalls With GEO Optimization Providers: Must-See Selection Criteria for Enterprise Partnerships in 2026 - 天极网

2026-03-21
天极网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically generative AI and optimization engines used for brand information distribution and recommendation. It discusses the development, deployment, and evaluation of these AI systems but does not describe any realized harm or plausible imminent harm resulting from their use or malfunction. The focus is on advising enterprises on how to select trustworthy AI service providers to mitigate risks and ensure compliance, which aligns with providing governance and ecosystem context. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Recommended GEO Optimization Companies for 2026: How Professional Service Agencies Build Authoritative Recognition in the AI Era

2026-03-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems as it discusses AI semantic understanding, multi-platform AI algorithm adaptation, and AI-driven optimization services. However, it does not describe any event where the development, use, or malfunction of these AI systems has directly or indirectly caused harm or violation of rights. Nor does it indicate any plausible future harm or hazard from these AI systems. Instead, it provides complementary information about the AI ecosystem, market landscape, and strategic considerations for enterprises leveraging AI in brand optimization. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

【網人網事】AI Poisoning - 香港文匯網

2026-03-22
香港文匯網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, particularly large language models or generative AI that rely on internet data for training and response generation. The described 'AI poisoning' involves deliberate manipulation of AI training data and outputs, which could plausibly lead to harms such as misinformation, erosion of trust, and harm to communities. No specific incident of harm is reported as having occurred yet; rather, the article serves as a warning about the potential and ongoing risk. Thus, it fits the definition of an AI Hazard, not an AI Incident. It is not merely complementary information because the main focus is on the risk and mechanism of harm, not on responses or updates. It is not unrelated because AI systems and their vulnerabilities are central to the discussion.

The AI Era: Digital Marketers Help Merchants 'Get Seen by AI' - 20260323

2026-03-22
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
The article primarily reports on the development and market adoption of AI-powered marketing tools designed to optimize content for AI language models. There is no indication of any direct or indirect harm caused by these AI systems, nor any plausible risk of future harm described. The content is informational about AI's impact on marketing and the evolution of SEO practices, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without describing an AI Incident or AI Hazard.

When AI Starts 'Lying', a Crisis of Trust Is Spreading

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI large language models used for search and recommendation) and describes a malicious use of AI input data ('AI poisoning') that causes the AI to output false and misleading information. This manipulation leads to harm by deceiving users, causing potential financial loss, and undermining trust in AI systems, which is harm to communities and individuals. The article details an active, ongoing harm caused by the AI system's outputs being manipulated, meeting the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the dissemination of false information.

Watch Out for Scams! Can AI Still Be Trusted After Being 'Poisoned'?

2026-03-22
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large AI models) that are maliciously manipulated ('poisoned') through GEO techniques into producing false outputs. The harms include misinformation dissemination, misleading users, undermining public trust, and potential damage to the public interest and security, which fall under harm to communities and violation of rights. The article reports that these harms are occurring, not merely potential, making it an AI Incident rather than a hazard: the AI systems' use has directly led to harm.