Global Ban on AI Chatbot DeepSeek Over Security and Privacy Fears


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

After Chinese startup DeepSeek’s AI chatbot DeepSeek-R1 became globally popular, the US Department of Defense and Congress, alongside Taiwan, Japan, and European states, banned its use over concerns that the app stores user data on Chinese servers. Security firms warn of exposed logs and potential intelligence gathering by Beijing.[AI generated]

Why's our monitor labelling this an incident or hazard?

DeepSeek is an AI system whose use is being restricted over fears of data leakage to a foreign government, which could plausibly lead to privacy violations and security-related harms. The article describes preventive bans and regulatory scrutiny but does not document any actual harm caused by the AI system. The event is therefore best classified as an AI Hazard, reflecting a credible potential for harm rather than realized harm.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security
Transparency & explainability
Accountability
Respect of human rights
Democracy & human autonomy

Industries
Digital security
Government, security, and defence
Media, social platforms, and marketing
Consumer services

Affected stakeholders
Consumers
Government

Harm types
Human or fundamental rights
Public interest

Severity
AI hazard

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard


Big Doubts About DeepSeek! US Cybersecurity Firm: 70% of Clients Ban It | AI | Chips | Cybersecurity | 新唐人电视台 (NTDTV)

2025-02-03
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use is being restricted over fears of data leakage to a foreign government, which could plausibly lead to privacy violations and security-related harms. The article describes preventive bans and regulatory scrutiny but does not document any actual harm caused by the AI system. The event is therefore best classified as an AI Hazard, reflecting a credible potential for harm rather than realized harm.

US Department of Defense Employees Reportedly Connected Office Computers to Chinese Servers to Use DeepSeek | International Focus | International | 經濟日報 (Economic Daily News)

2025-01-31
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI chatbot by a government employee and the subsequent blocking of access by the Pentagon's IT department. While the AI system is involved, there is no indication that any harm, security breach, or violation has occurred, or that there is a plausible risk of harm. The event primarily concerns the use of, and network access controls around, an AI system, without evidence of realized or potential harm. It therefore does not meet the criteria for an AI Incident or AI Hazard and is best classified as Complementary Information, as it provides context on AI system use and governance actions within a critical institution.

US Department of Defense Staff Reportedly Connected Computers to Chinese Servers to Use DeepSeek

2025-01-31
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (DeepSeek) by U.S. defense personnel, which led to concerns and actions by the Department of Defense to block access. The involvement of the FBI and White House investigations into export control violations further indicates potential legal and security risks. However, the article does not report any realized harm such as injury, rights violations, or operational disruption. The concerns and investigations indicate plausible future harm related to security and legal compliance. Therefore, this qualifies as an AI Hazard rather than an AI Incident.

US Military and Congress Warn Against Using the DeepSeek Chatbot

2025-01-31
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek-R1 chatbot) whose use has raised significant privacy and security concerns among governments, leading to bans and warnings. The AI system's development and use could plausibly lead to harms such as privacy violations or security breaches, but no actual harm or incident has been reported so far. Therefore, this qualifies as an AI Hazard, as the event involves plausible future harm stemming from the AI system's use and data handling practices, but no direct or indirect harm has yet materialized.

US National Security Expert: DeepSeek's Privacy Policy Is Worthless | The Epoch Times - Taiwan

2025-02-03
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an app likely using AI for search or data processing). The article details that sensitive data, including chat logs and API secrets, were unintentionally exposed online, constituting a direct harm to user privacy (a violation of rights). Additionally, the app's data is exploited by foreign actors for intelligence gathering and societal division, indicating harm to communities and national security. These harms have already occurred or are ongoing, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but reports actual data exposure and misuse.

Bloomberg: US Defense Personnel Connected Office Computers to Chinese Servers to Use DeepSeek | 聯合新聞網 (United Daily News)

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek AI chatbot) used by U.S. defense employees, which connects to Chinese servers, raising security and privacy concerns. The U.S. Department of Defense's blocking of access and the Navy's ban reflect recognition of potential risks. However, the article does not report any actual harm occurring from the AI system's use, such as data breaches, operational failures, or rights violations. The concerns are about plausible future harm related to data security and misuse, which fits the definition of an AI Hazard. The event is not merely general AI news or a response update, so it is not Complementary Information. Hence, the classification is AI Hazard.