EU Bans Chinese AI Chatbot DeepSeek Over Censorship and Data Security Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

European authorities, including Belgium’s federal Parliament and Italy, have banned the chatbot from Chinese startup DeepSeek amid findings that it enforces Beijing’s censorship and transmits user data to military-linked China Mobile. The EU is investigating potential privacy violations and security risks posed by the low-cost AI chatbot.[AI generated]

Why's our monitor labelling this an incident or hazard?

DeepSeek is an AI system involved in data processing and content filtering aligned with censorship criteria, which raises concerns about potential violations of rights and privacy. The transmission of user data to a military-linked company under sanctions poses significant risks of harm to citizens' rights and security. Although no specific harm is reported as having occurred, the described circumstances could plausibly lead to harms such as violations of rights and security threats. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to rights and security.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Democracy & human autonomy; Accountability

Industries
Digital security; Media, social platforms, and marketing; IT infrastructure and hosting; Government, security, and defence; Consumer services

Affected stakeholders
Consumers; General public

Harm types
Human or fundamental rights; Public interest; Reputational

Severity
AI hazard

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Parliamentary question | Emergence and control of DeepSeek | E-000712/2025 | European Parliament

2025-02-27
European Parliament
Chinese artificial intelligence DeepSeek banned in the Federal Parliament

2025-03-01
DH.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and its prohibition in a government setting. However, there is no indication that the AI system caused any harm or malfunction, nor that it poses a plausible future harm. The event is a regulatory or administrative action taken presumably as a precaution or response to perceived risks, but no direct or indirect harm is described. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal response to AI use.
Chinese artificial intelligence DeepSeek banned in the Federal Parliament

2025-03-01
7sur7
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, so an AI system is involved. The event concerns the use of this AI system and potential misuse or risks related to privacy and data protection. No actual harm or violation has been confirmed or reported as having occurred, only investigations and restrictions due to concerns. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of data protection laws or privacy harms, but these harms have not yet materialized. The article focuses on the restriction and investigation rather than a realized incident.
A strong decision at the Federal Parliament: the conversational chatbot DeepSeek banned!

2025-03-01
Sudinfo.be
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI conversational chatbot similar to ChatGPT, thus qualifying as an AI system. The decision to ban it stems from concerns about privacy and potential violations of data protection laws, which relate to human rights and legal obligations. However, the article does not report any realized harm or incident caused by DeepSeek, only potential risks and ongoing investigations. The event therefore represents a precautionary measure reflecting plausible future harm rather than an actual incident. It fits the definition of an AI Hazard because the AI system's use could plausibly lead to violations or harms, but, according to the article, no direct or indirect harm has yet occurred.
Chinese artificial intelligence DeepSeek banned in the Federal Parliament

2025-03-01
L'Echo
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot similar to ChatGPT). The article states that its use is banned due to concerns about privacy and ongoing investigations into potential violations of data protection regulations. No actual harm is reported yet, but the potential for violations of fundamental rights (privacy) exists. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to violations of rights if not properly controlled. The event does not describe realized harm, so it is not an AI Incident. It is more than just complementary information because the ban and investigation indicate a credible risk of harm.