AI-Generated Misinformation About Wang Yibo Highlights Risks of Model Hallucination and Media Amplification


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI models, including DeepSeek, generated false claims linking actor Wang Yibo to a criminal case, together with a fabricated apology statement, and media outlets amplified this content without verification. The incident demonstrates how AI-generated misinformation can harm reputations and pollute the information ecosystem through feedback loops and commercial manipulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article reports that an AI system (DeepSeek) produced fabricated content falsely claiming an official apology and a court judgment that do not exist. This AI-generated misinformation has caused reputational harm to the individual involved and confusion in the public domain. Since the AI system's malfunction (fabrication of false information) directly led to harm (misinformation and reputational damage), this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Other, General public

Harm types
Reputational, Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

DeepSeek's apology to Wang Yibo is fake: why does AI get it wrong?

2025-07-04
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article reports that an AI system (DeepSeek) produced fabricated content falsely claiming an official apology and a court judgment that do not exist. This AI-generated misinformation has caused reputational harm to the individual involved and confusion in the public domain. Since the AI system's malfunction (fabrication of false information) directly led to harm (misinformation and reputational damage), this event meets the criteria for an AI Incident.

DeepSeek apologizes over the "Wang Yibo case": fake news!

2025-07-04
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) that generated and spread false information claiming DeepSeek had apologized over a scandalous association. This misinformation caused reputational harm and public confusion, which qualifies as harm to communities and to individuals' rights. The AI's malfunction in generating and amplifying false content is central to the event; it therefore meets the criteria for an AI Incident due to indirect harm caused by AI-generated misinformation and reputational damage.

DeepSeek apologizes to Wang Yibo? Many people were fooled today

2025-07-04
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, which generated false statements and fabricated legal references that were widely disseminated, misleading many people. This misinformation caused reputational harm to individuals and social confusion, which fits the definition of harm to communities. The AI's malfunction in producing false content directly led to this harm. The event is not merely a potential risk but a realized incident of harm caused by AI outputs, thus classifying it as an AI Incident rather than a hazard or complementary information.

AI casually made it up, and a wave of media outlets believed it

2025-07-04
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate false and misleading content that was mistaken as true by multiple media outlets, leading to reputational harm to Wang Yibo. This constitutes a violation of rights (reputational harm) and harm to communities (misinformation affecting public trust). The AI system's use directly led to these harms through the creation and spread of fabricated content. Therefore, this qualifies as an AI Incident under the definitions provided.

2025-07-04
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chat assistant) generating false content (hallucinations) that was widely disseminated as true, causing reputational harm and misinformation. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and reputational damage). The article details the harm caused by the AI hallucination and its societal impact, not just a potential risk or a complementary update. Therefore, it qualifies as an AI Incident.

DeepSeek's apology to Wang Yibo is fake; rampant AI rumors draw concern

2025-07-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating false apology statements and fake legal documents that were widely spread, misleading the public and damaging the reputation of individuals such as Wang Yibo. It also references prior incidents where AI-generated defamatory content caused significant personal harm. The AI system's use in generating and spreading false information has directly led to harm to individuals' reputations and social trust, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The harm is realized, not just potential, and the AI system's role is pivotal in causing this harm.

DeepSeek apologizes to Wang Yibo: the AI-generated bogus statement draws attention

2025-07-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system generated a false apology statement that was mistaken for a real one, causing media to spread misinformation. However, the article does not report any direct or indirect harm resulting from this misinformation, such as reputational damage confirmed by legal action, health harm, or rights violations. The event shows a plausible risk of harm from AI-generated misinformation but does not confirm that harm has materialized. Therefore, it is best classified as an AI Hazard, reflecting the plausible future harm from AI-generated false content and misinformation dissemination.

DeepSeek apologizes to Wang Yibo? Many media outlets were fooled; knowing the reason, I laughed

2025-07-04
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) generating false content that was taken as factual by media outlets, leading to reputational harm to Wang Yibo and misinformation spreading in the community. This constitutes harm to communities and individuals' rights (reputation and possibly privacy), fulfilling the criteria for an AI Incident. The AI system's use (generation of false apology statements) directly led to the harm. Although the harm is reputational and informational rather than physical, it is significant and clearly articulated. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

"DeepSeek apologizes to Wang Yibo" exposes the AI-pollution industry chain: "content farms" churn out information garbage, and 13,800 yuan buys a spot in large-model recommendations

2025-07-04
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI large language models (AI systems) that generate and spread false information, creating a feedback loop of misinformation ('AI-generated false news -> media propagation -> AI learning and re-spreading'). This has caused realized harm by misleading users and polluting the information ecosystem, which is harm to communities. Additionally, the commercial practice of buying AI recommendation placements to promote biased or false content further exacerbates this harm. The event clearly meets the definition of an AI Incident because the AI systems' use and malfunction have directly led to significant harm. The article also discusses responses and recommendations, but the primary focus is on the incident of misinformation generation and spread by AI, not just complementary information or potential hazards.

Why does AI's "IQ go offline" when faced with false information?

2025-07-04
红网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating false content that directly led to reputational harm and misinformation spreading, which constitutes harm to communities and individuals' rights. The AI's malfunction or limitation in verifying facts caused the incident. The widespread dissemination of this AI-generated false information and its impact on public trust and information order meets the criteria for an AI Incident under the OECD framework, as the harm is realized and directly linked to the AI system's outputs.

DeepSeek's apology to Wang Yibo is fake, but the need to guard against rampant AI lies is real

2025-07-04
华龙网
Why's our monitor labelling this an incident or hazard?
The article does not report a concrete AI Incident where harm has directly or indirectly occurred due to an AI system's malfunction or misuse in this specific case; the apology was fabricated and no actual apology or harm from DeepSeek's AI system is confirmed. It also does not describe a specific AI Hazard event with plausible future harm beyond the general risk of AI misinformation. The main focus is on raising awareness and urging responses to the broader problem of AI-generated falsehoods, making it Complementary Information that contextualizes and informs about AI-related societal challenges rather than reporting a discrete incident or hazard.

DeepSeek's apology to Wang Yibo is fake, but the need to guard against rampant AI lies is real

2025-07-04
金羊网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and misleading content that harms individuals' reputations and spreads misinformation, which fits the definition of harm to communities and individuals. However, the article focuses on the general phenomenon of AI misinformation proliferation and the risks it poses, rather than a single concrete incident with direct causation of harm by an AI system. It also discusses societal and governance responses and the need for improved regulation. Therefore, this is best classified as Complementary Information, providing context and highlighting the ecosystem challenges related to AI harms, rather than reporting a new AI Incident or AI Hazard.

Zhang Chi: Twist after twist, DeepSeek apologizes to Wang Yibo? Fake!

2025-07-04
21jingji.com
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek's model) generated false content (a hallucination) that was spread by the media, producing misinformation and potential reputational harm. Although the AI system did not directly cause verified harm, its false outputs have already circulated widely in public discourse, damaging reputations and spreading misinformation. Because this indirect harm has materialized, the event qualifies as an AI Incident rather than a hazard or complementary information.

Attention! Wang Yibo should not have been caught in the crossfire

2025-07-05
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) that generated false rumors about Wang Yibo, which harmed his reputation and personal rights. The use of AI to create and spread false information that damages individuals' reputations constitutes a violation of rights and harm to communities. Since the harm has already occurred and is described as significant, this qualifies as an AI Incident under the framework.

Guide to spotting fake AI screenshots released: combining technical checks, logic, and authoritative verification to prevent deception

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the use and potential misuse of AI systems that generate fake screenshots, highlighting the risks of misinformation and the need for verification. However, it does not describe a specific event where harm has occurred or is imminent due to AI system malfunction or misuse. Instead, it offers preventive guidance and awareness about AI-generated fake content, which is complementary information enhancing understanding of AI-related risks and responses. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Dissecting the AI-hallucinated fake apology statement: the DeepSeek incident reveals the core of the technology's risks

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes how AI systems (large language models) generated false and misleading content (fake apology statements) that were then spread and reinforced through media and further AI interactions, causing harm to individuals (e.g., reputational damage) and communities (misinformation). The AI system's use and misuse directly led to these harms, fulfilling the criteria for an AI Incident. The article also discusses the broader societal and ethical risks posed by such AI hallucinations and data poisoning, but the primary focus is on realized harm from AI-generated false content, not just potential harm or general commentary. Therefore, this is classified as an AI Incident.

Rumors linking Wang Yibo to the Li Aiqing case continue to ferment, driven by fan-circle infighting and AI information loopholes

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating false content and apology statements that were never officially released, which were then disseminated by media and fan groups, causing ongoing misinformation and social harm. The AI's malfunction (content audit failure) and misuse (feeding AI with fake evidence) directly led to reputational harm and social disruption, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves violations of rights to truthful information and harm to communities through misinformation. Hence, the event is best classified as an AI Incident.

AI-generated legal documents are hard to authenticate; three anti-fake tips for netizens: examine the details, check authoritative sources, apply common sense

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident or AI Hazard event but rather offers complementary information about the challenges and risks of AI-generated legal documents and how to guard against them. It focuses on educating users about AI's limitations and potential misuse without reporting a concrete harm or a direct plausible future harm event. Therefore, it fits the definition of Complementary Information, as it supports understanding and risk awareness related to AI systems in the legal domain.

DeepSeek's "apology" to Wang Yibo was AI-fabricated; multiple verifications find the statement has no factual basis

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating false statements based on manipulated inputs, leading to reputational harm and misinformation dissemination, which qualifies as harm to communities and violation of rights. The AI system's malfunction (being misled by false inputs) and its use in spreading false content directly caused harm. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is ongoing.

Rumors that DeepSeek apologized to Wang Yibo are false; officials debunk them as fan-induced AI-generated misinformation

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article details how AI-generated false content (apology statements) was mistaken as official, causing reputational harm to an individual (a public figure) and leading to legal actions against malicious actors who manipulated AI inputs to spread defamatory content. The AI system's malfunction or misuse (data pollution and hallucination) directly contributed to the harm. This fits the definition of an AI Incident as it involves violations of rights and harm to communities through misinformation. The event is not merely a potential risk or complementary information but a realized harm caused by AI outputs.

Malicious AI data pollution threatens digital society; the public urgently needs identification skills to avoid being misled by false information

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the context of data pollution and misinformation generation, which can cause harm to communities by misleading the public. However, it does not describe a new or ongoing AI Incident where harm has already occurred directly or indirectly, nor does it report a specific event that plausibly could lead to harm imminently (AI Hazard). Instead, it provides guidance, awareness, and recommendations for detection, prevention, and legal recourse, which aligns with the definition of Complementary Information. The focus is on societal and governance responses and public education rather than a direct incident or hazard.

False information polluting AI systems causes a blunder; the DeepSeek-Wang Yibo apology incident is finally debunked

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false content due to malicious data pollution and user manipulation, which directly led to reputational harm and legal disputes. The AI's role in producing and amplifying misinformation is pivotal, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The harm is realized, not just potential, and the AI system's malfunction or misuse is central to the incident. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

New AI rules tighten control of fake news; governance success hinges on patching technical loopholes, strengthening legal enforcement, and optimizing human factors

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly addresses harms caused by AI systems generating fake news and misinformation, which affect communities and individuals (harm to communities and violation of rights). It discusses legal penalties, platform responsibilities, and technical measures to mitigate these harms, indicating that AI-generated misinformation is an ongoing issue causing real harm. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly or indirectly led to harm through the spread of false information. The article is not merely about potential risks or policy announcements without harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI systems are central to the issue.

AI-generated false information repeatedly hits Chinese entertainment stars; Wang Yibo's reputation-rights defense becomes a landmark case

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and synthetic voices used to spread false information, defamatory content, and fraudulent advertisements targeting celebrities. These uses of AI have directly led to harm including reputational damage, fraud against consumers, and social harm such as harassment and defamation. The involvement of AI in creating and disseminating this harmful content meets the criteria for an AI Incident because the harms have occurred and are ongoing. The article also discusses legal and societal responses, but the primary focus is on the realized harms caused by AI misuse.

"DeepSeek apologizes to Wang Yibo" exposes the AI-pollution industry chain: "content farms" churn out information garbage, and 13,800 yuan buys a spot in large-model recommendations

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) generating and disseminating false information that has been accepted as true by users and media, leading to misinformation harm. The AI's role is pivotal in creating and amplifying the falsehoods, fulfilling the criteria for an AI Incident under harm to communities and violation of rights to truthful information. The article also documents the commercial exploitation of AI recommendation systems, which directly contributes to the spread of misinformation. The harms are realized, not just potential, and the AI systems' malfunction or misuse is central to the incident.

DeepSeek's apology to Wang Yibo was made up by AI; a large-model hallucination sparks a fake trending-topic case

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chat assistant) generating fabricated content (an apology statement) that was falsely interpreted as a real company statement. This misinformation spread widely, causing reputational harm and misleading the public, which is a form of harm to communities. The AI system's hallucination directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but a realized harm scenario, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system's role is pivotal in causing the misinformation.

"DeepSeek's apology to Wang Yibo" was actually made up by AI? A large-model hallucination sparks a fake trending-topic case

2025-07-04
新浪财经
Why's our monitor labelling this an incident or hazard?
The article details how DeepSeek's AI chatbot generated a false apology statement that was mistaken for an official company statement and widely spread, causing misinformation and reputational harm. The AI system's hallucination is the direct cause of the false news dissemination. This meets the definition of an AI Incident because the AI system's malfunction (hallucination) directly led to harm (misinformation and reputational damage). Although the company denied issuing any apology, the AI-generated content was treated as factual by the public and media, fulfilling the criteria for harm to communities and individuals. Hence, the event is classified as an AI Incident.

AI-fabricated celebrity apologies run rampant; technology abuse and information pollution are widely intertwined

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and legally styled content that harms celebrities' reputations and misleads the public, fulfilling the criteria for an AI Incident due to direct harm to individuals' rights and communities. The article documents realized harm (defamation, legal cases, misinformation spread) caused by AI misuse, not just potential harm. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek's apology to Wang Yibo is fake: why does AI get it wrong?

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) that produced fabricated and false content about a legal case and an apology that never occurred. This misinformation has harmed the reputation of the individual involved (Wang Yibo) and misled the public, which fits the definition of harm to communities and violation of rights. The AI system's malfunction directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The article also discusses the broader implications of AI errors and misinformation, but the core event is the AI-generated false apology and legal document, which caused realized harm.

DeepSeek's apology to Wang Yibo is false information; the alarm sounds for governing AI-generated fake content

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false statements ('hallucinations') that were then disseminated as fake news, causing harm to individuals' reputations and misleading the public. This constitutes harm to communities and a violation of rights through misinformation. The AI's role in producing and amplifying false content is direct and pivotal. The article also discusses legal actions and governance responses, but the primary focus is on the realized harm caused by AI-generated false information. Therefore, this qualifies as an AI Incident.

Official media collectively blunder! AI fabricates a DeepSeek apology letter to Wang Yibo; where is truth in the digital age?

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false content (AI-generated fake apology letters and fake legal documents) that directly led to harm in the form of misinformation spreading widely across social media and mainstream media, causing harm to communities by undermining trust in media and legal institutions. The AI's role is pivotal as it created and amplified false information, and the misuse of AI-generated content by fans and media further exacerbated the harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights (misinformation affecting public trust and legal credibility) and harm to communities (public confusion and erosion of trust).

DeepSeek's apology to Wang Yibo is fake, but the need to guard against rampant AI lies is real

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false content (AI-generated fake apology and misinformation) that has caused reputational harm and social confusion, which constitutes harm to individuals and communities. The article describes realized harm from AI-generated misinformation and its impact on people's reputations and mental health. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm through misinformation and reputational damage. The article also discusses responses and challenges but the primary focus is on the harm caused by AI-generated false information.

AI "fake apology" goes viral across the internet: what lies behind the DeepSeek apology incident?

2025-07-04
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating false and misleading content that directly caused reputational harm and misinformation spread, which qualifies as harm to communities and violation of rights (reputation). The AI hallucination (malfunction) is the root cause of the false content. The widespread dissemination and legal responses confirm the harm has materialized. Therefore, this is an AI Incident.

极目锐评 | DeepSeek's apology to Wang Yibo is fake, but the need to guard against rampant AI lies is real

2025-07-04
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and misleading content that has caused reputational harm and misinformation spread, which constitutes harm to communities and individuals. The false apology incident itself is a manifestation of AI-generated misinformation causing harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and reputational damage. The article also discusses broader societal impacts and responses, but the primary focus is on realized harm from AI-generated falsehoods.

The DeepSeek, Wang Yibo, and Li Aiqing gossip tripped up a host of media outlets

2025-07-04
m.163.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that generated or associated false content linking Wang Yibo to a criminal case, which damaged his reputation. The AI-generated apology statement was fabricated, and media outlets reported it without verification, amplifying the harm. The AI system's malfunction or misuse directly led to reputational harm, a violation of legal rights (name and reputation rights), which fits the definition of an AI Incident. The event is not merely a potential hazard or complementary information but a realized harm caused by AI outputs and their propagation.

DeepSeek apologizes to a celebrity: uncovering the truth behind the farce

2025-07-07
36氪
Why's our monitor labelling this an incident or hazard?
The article details how DeepSeek, an AI large language model, generated false statements linking Wang Yibo to a corruption case, which were mistaken as official statements and widely spread, causing reputational damage. This is a direct harm to the individual's rights and to the community through misinformation. The AI system's malfunction (generating false content) and its use by fans to create fake apology statements are central to the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs and their misuse.

This generation of media professionals works hard to use AI, and keeps getting "played" by AI | Commentary

2025-07-07
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false and misleading content that was taken as factual by media and the public, causing harm to individuals' reputations and misleading communities. The AI's role in producing and perpetuating misinformation, combined with human overreliance and lack of verification, directly led to harm consistent with violations of rights and harm to communities. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation and its dissemination.

DeepSeek in trouble again? The scene is hard to imagine

2025-07-05
m.163.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (DeepSeek and other large language models) generating false or misleading content (hallucinations) that has led to misinformation and reputational harm, which constitutes harm to communities and individuals. Although the article describes a general, ongoing problem rather than a single discrete event, the harms it reports are realized, not merely potential, and the regulatory and personal responses it discusses are context rather than the main focus. Because AI-generated misinformation has directly caused harm, the classification is AI Incident rather than AI Hazard or Complementary Information.

DeepSeek apologizes to Wang Yibo: I laughed when I learned the truth

2025-07-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI) generating false content that caused reputational harm to Wang Yibo, which is a violation of rights (harm to a person). The AI system's use directly led to misinformation being spread and misinterpreted as official statements, causing harm. The article also discusses the broader societal impact of unverified AI-generated content, reinforcing the incident's significance. Since harm has occurred and is linked to the AI system's output, this is classified as an AI Incident rather than a hazard or complementary information.

DeepSeek apologizes to Wang Yibo? I laughed when I learned the truth

2025-07-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system DeepSeek generated false content that harmed Wang Yibo's reputation, which constitutes harm to a person. The incident arose from the AI's content generation malfunction (producing misinformation) and the misuse or misunderstanding of that output by media and users. This fits the definition of an AI Incident because the AI system's use directly led to harm (defamation). The article also highlights broader societal issues with overreliance on AI-generated content, but the core event is the misinformation causing reputational harm. Therefore, this is classified as an AI Incident.

What role does AI play in the new round of US-China competition, and who has the edge? Tsinghua University's Zhang Yaqin explains

2025-07-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article centers on expert analysis and commentary about AI's role in geopolitical competition and technological progress, without reporting any incident or hazard involving AI systems causing or potentially causing harm. It discusses AI innovations, applications, and ecosystem building, which are informative and contextual but do not meet the criteria for AI Incident or AI Hazard. Therefore, it is best classified as Complementary Information, as it enhances understanding of the AI ecosystem and strategic developments without describing a harmful event or credible risk of harm.

DeepSeek usage rate plunges from 50% to 3%; R2 model release delayed

2025-07-13
看中国
Why's our monitor labelling this an incident or hazard?
The article discusses the decline in usage and delayed release of an AI system due to technical and resource challenges, including hallucination problems in the model. While hallucinations (false information generation) can be harmful, the article does not report any actual incidents of harm or disruption caused by the AI system. There is also no mention of plausible future harm or credible risk scenarios arising from these issues. Therefore, this is not an AI Incident or AI Hazard. The article mainly provides context on the AI ecosystem and competitive landscape, making it Complementary Information.

The West has started smearing DeepSeek!

2025-07-14
m.163.com
Why's our monitor labelling this an incident or hazard?
The article centers on reputational and market competition issues related to DeepSeek, an AI system, but does not describe any event where the AI system's development, use, or malfunction has led to injury, rights violations, infrastructure disruption, or other harms. The criticisms and data comparisons are about performance and market share, not about harm caused or plausible harm. The geopolitical and media bias discussion is contextual and does not constitute an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, providing context and societal/governance responses related to AI development and competition.

Lin Shih-hsien blasts Changhua County Government for proposing to use China's DeepSeek to draft official documents; Administration Department: proposed internally but never approved

2025-07-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI tool DeepSeek was only proposed internally but never approved or used, and that the government has blocked access to it. No harm or data leakage has been reported. The AI system's involvement is limited to a suggestion stage, with no deployment or malfunction. The concern is about plausible future harm related to data security and national security risks if such a tool were used. Hence, this qualifies as an AI Hazard, as the development and potential use of this AI system could plausibly lead to harm, but no harm has yet occurred.

Lin Shih-hsien blasts Changhua County Government for proposing to use China's DeepSeek to draft official documents; Administration Department says the claim is untrue

2025-07-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned as being proposed for use in official document writing, which involves AI-generated content. The event stems from the potential use (use phase) of this AI system. Although no actual use or harm has occurred, the suggestion to use a high-risk AI tool from China in government operations could plausibly lead to an AI Incident involving data leakage or national security breaches. Therefore, this qualifies as an AI Hazard rather than an AI Incident. The government's denial and preventive measures indicate no realized harm yet, but the risk is credible and significant.

Accused of recommending DeepSeek, Changhua County Government says it was an internal proposal that was never approved

2025-07-15
UDN
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is mentioned as a proposed tool for assisting with document and speech writing, indicating AI involvement. However, the tool was never adopted or used, and no harm occurred. The concern is about potential cybersecurity and national security risks had it been used, but since the proposal was rejected before any deployment, no direct or indirect harm materialized and no credible ongoing risk remains. The event therefore does not meet the threshold for an AI Incident or AI Hazard and is best classified as Complementary Information about internal governance and risk management in AI tool adoption.

Recommending DeepSeek, which the central government has banned? Changhua County Government stresses it was only an internal proposal that was never adopted

2025-07-15
UDN
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned as being proposed for use. However, the proposal was not implemented, and no harm or data breach has occurred. The event concerns a potential risk: use of the AI system could plausibly lead to harm (cybersecurity risks, data exposure), but none has materialized. Since the risk is credible yet unrealized, this qualifies as an AI Hazard rather than an AI Incident.

Accused of recommending DeepSeek, Changhua County Government says it was an internal proposal that was never approved

2025-07-15
Central News Agency
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is mentioned as a proposed tool for assisting with document and speech writing, indicating AI involvement. However, the tool was never actually adopted or used by the government office, and no harm occurred. The main issue is the potential cybersecurity risk if it had been used, but since it was not implemented, no direct or indirect harm has materialized. Therefore, this event represents a plausible risk scenario (AI Hazard) rather than an incident. The article focuses on the internal proposal and the preventive measures taken, highlighting a potential future harm rather than realized harm.

Rumored recommendation that county staff "use China's DeepSeek"? Changhua County Government responds

2025-07-15
SETN (三立新聞)
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) that was proposed for use but ultimately not adopted due to security concerns. There is no indication that the AI system was actually used or caused any harm. The event describes a plausible risk that was mitigated before any harm occurred; because the article's focus is the government's response and policy enforcement rather than an ongoing or realized risk, it fits best as Complementary Information.