Italy Blocks Chinese AI Chatbot DeepSeek Over Privacy Breach Fears


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Italian Data Protection Authority has ordered the urgent removal of the Chinese AI chatbot DeepSeek from app stores and opened an investigation after its developers failed to clarify how user data is collected, stored, and used. Researchers also found a security flaw that exposed sensitive personal information, raising serious privacy breach concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (DeepSeek, a chatbot AI) and concerns its development and use, specifically regarding data collection and training data. The Italian authority's request is motivated by the plausible risk of harm to personal data privacy for millions, which could constitute violations of rights if realized. Since no actual harm or incident has occurred yet, but there is a credible potential risk, this qualifies as an AI Hazard rather than an AI Incident. The article also mentions ongoing investigations and security concerns but does not report realized harm.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Transparency & explainability, Accountability, Respect of human rights

Industries
Consumer services, Digital security, IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI hazard

Business function:
Citizen/customer service

AI system task:
Interaction support/chatbots, Content generation


Articles about this incident or hazard


Italy requests information from 'DeepSeek' over its "possible risk to millions of people"

2025-01-29
Diario1
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a chatbot AI) and concerns its development and use, specifically regarding data collection and training data. The Italian authority's request is motivated by the plausible risk of harm to personal data privacy for millions, which could constitute violations of rights if realized. Since no actual harm or incident has occurred yet, but there is a credible potential risk, this qualifies as an AI Hazard rather than an AI Incident. The article also mentions ongoing investigations and security concerns but does not report realized harm.

DeepSeek in the EU's crosshairs: these two countries want to know how it collects user information

2025-01-30
20 minutos
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a generative AI assistant) that collects and processes user data. The event involves regulatory inquiries into its data handling practices, which could plausibly lead to violations of data protection laws and user privacy rights if non-compliance is found. However, the article does not report any realized harm or incident, only ongoing investigation and potential future sanctions. Therefore, this situation constitutes an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident (violation of rights under applicable law) if issues are confirmed.

Italy blocks the Chinese artificial intelligence 'app' DeepSeek over lack of information

2025-01-30
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and regulatory intervention due to lack of transparency about data collection, usage, and storage, which implicates potential violations of data protection and privacy rights (a form of human rights). No actual harm or incident is reported; instead, the blocking is a preventive measure based on insufficient information, indicating a credible risk of harm. Hence, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations or harm if not properly regulated or understood. It is not Complementary Information because the main focus is on the regulatory blocking and investigation, not on updates or responses to a past incident. It is not Unrelated because the AI system and its data practices are central to the event.

Italy blocks the Chinese app 'DeepSeek' over lack of information

2025-01-30
Gestión
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose development and use are under scrutiny by Italian authorities for lack of transparency about data collection and training data. The authorities have blocked the app to protect user data and have opened an investigation, indicating concern about potential violations of data protection laws and user privacy. No direct harm has been reported yet, but the potential for harm through misuse or unlawful data processing is credible. Hence, this is an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the regulatory action and potential risk, not on updates or responses to a past incident. It is not Unrelated because the event clearly involves an AI system and potential harm.

Italy blocks the Chinese app 'DeepSeek' over lack of information

2025-01-30
Diario1
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) whose use and data practices are under scrutiny by Italian authorities. The authorities have not reported any realized harm but have taken urgent action to block the app due to insufficient transparency and potential non-compliance with data protection laws, which protect fundamental rights. This regulatory intervention and investigation represent a governance response to potential AI-related risks rather than an AI Incident or Hazard. There is no indication that harm has occurred or that the AI system's use has directly or indirectly led to harm. The focus is on the regulatory and investigative measures, making this Complementary Information.

Italy bans DeepSeek, the U.S. Congress bars its staff from using the Chinese AI... why?

2025-01-31
Antena3
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and details how its use and data management practices have led to the exposure of sensitive personal data, which is a violation of privacy rights and a harm to users. The blocking by Italian authorities and the U.S. Congress's restrictions are responses to these harms. The data breach identified by Wiz Research confirms realized harm rather than just potential risk. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction in data security and privacy protection.

Italy blocks Chinese artificial intelligence "DeepSeek"

2025-01-30
Caracol Radio
Why's our monitor labelling this an incident or hazard?
The article involves an AI system ('DeepSeek') whose operation is blocked by Italian authorities due to lack of transparency about personal data handling, which could plausibly lead to violations of data protection laws and user privacy harms. Since no actual harm is reported, but the blocking is a precaution to prevent potential harm, this constitutes an AI Hazard. The event is not a realized incident but a regulatory response to a credible risk associated with the AI system's deployment.

DeepSeek: Italy blocked the Chinese artificial intelligence app

2025-01-31
Diario Panorama
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is being restricted by a regulatory authority due to concerns about data privacy and transparency. Although no direct harm has been reported yet, the lack of clarity about data handling and potential unauthorized use of personal data (e.g., via web scraping) poses a credible risk of violations of data protection rights and privacy, which are human rights. Therefore, this situation constitutes an AI Hazard because the AI system's use could plausibly lead to harm, but no realized harm is described in the article.

DeepSeek blocked in Italy while European countries investigate the privacy of user data

2025-01-29
Granada Hoy
Why's our monitor labelling this an incident or hazard?
The article details ongoing investigations and regulatory scrutiny regarding the AI system's data handling and privacy compliance, which could plausibly lead to violations of data protection laws and harm to users' privacy rights. Since no confirmed harm or breach has occurred yet, but there is a credible risk and regulatory action to prevent potential harm, this qualifies as an AI Hazard rather than an AI Incident. The focus is on potential future harm and compliance verification rather than reporting an actual incident of harm or rights violation.

Italy blocks the Chinese app 'DeepSeek' over lack of information

2025-01-30
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly mentioned, and the issue concerns its data usage and transparency. Although no direct harm has been reported, the lack of information and the regulatory blocking indicate a plausible risk of harm, such as violations of data protection rights or privacy breaches. Since no actual harm has been reported yet, but there is a credible risk that the AI system's use could lead to harm, this qualifies as an AI Hazard rather than an Incident. The event is primarily about regulatory action and investigation, not about realized harm or incident.

Italy enters the AI war

2025-01-30
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
DeepSeek is explicitly described as an AI system designed to process human conversations. The event involves the use and malfunction (security vulnerability) of this AI system, which exposed sensitive user data, constituting harm to users' privacy and a violation of data protection rights. The Italian authority's intervention and app removal confirm the harm has materialized. Hence, this is an AI Incident as the AI system's use and malfunction directly led to harm.