South Korea Suspends DeepSeek AI Downloads Over Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korea’s Personal Information Protection Commission has halted new downloads of Chinese AI app DeepSeek, following bans on internal use by several ministries due to risky data-collection practices. The government requires compliance with local privacy laws and app improvements before resumption. Existing users retain access while DeepSeek appoints a local representative to remedy shortcomings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system, DeepSeek, whose use has been suspended or banned by several countries due to concerns about data privacy and potential misuse of collected data. The harms described are potential violations of personal data protection laws and risks to national security, which could plausibly lead to AI Incidents if the data were misused or leaked. Since the article focuses on the potential risks and preventive bans rather than actual realized harm, this situation fits the definition of an AI Hazard rather than an AI Incident. The AI system's development and use are central to the concerns, and the plausible future harm justifies classification as an AI Hazard.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Accountability; Transparency & explainability; Robustness & digital security

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Reputational; Economic/Property

Severity
AI hazard

AI system task
Interaction support/chatbots


Articles about this incident or hazard

South Korea is the latest: the countries around the world that have blocked China's DeepSeek AI

2025-02-17
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The article focuses on the blocking or suspension of an AI system's functions by various countries due to legal discrepancies, particularly regarding personal data protection. There is no mention of actual harm caused by the AI system, nor is there a direct or indirect link to injury, rights violations, or other harms. Instead, the event reflects governance and regulatory responses to potential risks or non-compliance, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Daniel Low, Argentine artificial intelligence expert at Harvard: "In the AI race there is a geopolitical question as well as an economic one"

2025-02-15
La Nacion
Why's our monitor labelling this an incident or hazard?
The article primarily provides expert commentary on the AI landscape, focusing on the competitive and geopolitical aspects of AI development, particularly the emergence of DeepSeek. It highlights potential future risks and concerns but does not report any actual harm or incident caused by AI systems. Therefore, it fits the definition of Complementary Information, as it enhances understanding of the AI ecosystem and its implications without describing a specific AI Incident or AI Hazard.
Which countries have restricted or blocked the Chinese AI app DeepSeek

2025-02-17
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system, DeepSeek, which has been restricted or blocked by multiple countries due to concerns about data privacy violations, potential espionage, and the distribution of malicious software. These restrictions are a direct response to the risks and harms associated with the AI system's use, including violations of data protection laws and cybersecurity threats. Since these actions are in response to realized or ongoing harms linked to the AI system's use, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to violations of legal protections and security risks.
DeepSeek: the countries where the Chinese AI has been blocked

2025-02-17
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, whose use has been suspended or banned by several countries due to concerns about data privacy and potential misuse of collected data. The harms described are potential violations of personal data protection laws and risks to national security, which could plausibly lead to AI Incidents if the data were misused or leaked. Since the article focuses on the potential risks and preventive bans rather than actual realized harm, this situation fits the definition of an AI Hazard rather than an AI Incident. The AI system's development and use are central to the concerns, and the plausible future harm justifies classification as an AI Hazard.
DeepSeek is quietly spreading through China's bureaucracy, and it is teaching us a lesson about the future of AI

2025-02-18
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek) in public administration and enterprises, which fits the definition of an AI system. However, the article does not report any realized harm or violations caused by the AI system, nor does it highlight any credible risk of harm. Instead, it focuses on the rapid adoption and efficiency gains, contrasting governance approaches. Therefore, this is best classified as Complementary Information, as it provides important context and updates about AI deployment and governance without describing an AI Incident or AI Hazard.
DeepSeek banned in South Korea: which other countries have blocked it

2025-02-17
Milenio.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI application) whose use is being restricted by various governments due to concerns about data privacy and security. The article does not report any realized harm or incident caused by DeepSeek but highlights the plausible risks of data leakage, espionage, and violation of personal data protection laws. These concerns and government actions represent a credible potential for harm, making this an AI Hazard rather than an AI Incident. The article primarily reports on the regulatory and governmental responses to these risks, which aligns with the definition of an AI Hazard because the harms are plausible but not yet realized.
This country has banned the use of DeepSeek until there are "improvements", and this is why

2025-02-17
El Confidencial
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data processing and collection. The suspension by South Korean authorities is due to concerns about potential privacy violations and data leaks, which could lead to harm to individuals' rights. Since the article does not report actual harm but focuses on the risk and regulatory action to prevent it, this fits the definition of an AI Hazard. The event is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.
More countries block DeepSeek: South Korea bans the Chinese chatbot over privacy violations

2025-02-17
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that DeepSeek's AI chatbot has violated privacy regulations by improperly handling user data and transferring it to third parties without proper consent, leading to governmental bans and restrictions. This constitutes a violation of human rights and legal obligations related to data privacy. The involvement of the AI system in causing these harms is direct, as the AI service's data practices are the cause of the regulatory actions. Therefore, this event meets the criteria for an AI Incident due to realized harm from the AI system's use.
China seeks to define AI's technological future with a strategic summit led by its new flagship

2025-02-17
3D Juegos
Why's our monitor labelling this an incident or hazard?
While the article discusses the development, use, and strategic importance of an AI system (DeepSeek) and its economic impact, it does not describe any specific harm or incident caused by the AI system. There is no mention of injury, rights violations, disruption, or other harms directly or indirectly caused by DeepSeek. The focus is on the strategic and economic implications and future potential, not on realized or imminent harm. Therefore, this is best classified as Complementary Information providing context on AI ecosystem developments and governance responses.
South Korea temporarily suspended DeepSeek over its data policies

2025-02-17
La Voz
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned, and its use involves data collection policies that raise concerns about personal data leakage, which is a violation of data protection laws and personal rights. The authorities have suspended the app temporarily to mitigate these risks. Since no actual harm or data breach is reported, but there is a credible risk of harm to users' personal data, this event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident if not addressed.
DeepSeek downloads halted in South Korea over privacy concerns

2025-02-17
La Voz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised privacy concerns that could plausibly lead to violations of personal data protection laws and user privacy rights. The app has been removed from app stores and usage restricted as a precautionary measure. Since no actual harm or violation has been confirmed or reported, but there is a credible risk of privacy harm, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and regulatory response rather than a realized incident of harm.
China urges against "politicizing" technology after DeepSeek is blocked in South Korea

2025-02-17
Gestión
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose data handling practices are under investigation for compliance with privacy laws. The removal from app stores and regulatory scrutiny indicate concerns about potential violations of user privacy, which is a human rights issue. Since no actual harm or confirmed violation has been reported yet, but there is a credible risk that the AI system's use could lead to such harm, this event fits the definition of an AI Hazard. The article focuses on regulatory and political responses rather than reporting an incident of harm, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and potential harm.
Which countries have restricted or blocked the Chinese AI app DeepSeek

2025-02-17
eju.tv
Why's our monitor labelling this an incident or hazard?
The article describes multiple governments restricting or banning the use of the AI system DeepSeek due to concerns about data privacy violations, potential espionage, and security risks. These concerns relate to the AI system's use and data management, which could plausibly lead to harms such as violations of personal data protection laws and national security breaches. Since the article does not report actual realized harm but focuses on the credible risk and preventive actions taken, this situation fits the definition of an AI Hazard rather than an AI Incident. The AI system's involvement is clear, and the plausible future harm is well articulated by the governments' responses.
Four continents and South Korea tighten the block on China's DeepSeek

2025-02-17
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has led to multiple countries taking restrictive actions due to concerns about data privacy violations, potential espionage, and security risks. These concerns relate to violations of personal data protection laws and potential breaches of rights, which are recognized harms under the AI Incident definition. The blocking and prohibitions by governments indicate that the AI system's use has directly or indirectly led to these harms or legal breaches. Although physical harm is not reported, the violations of data protection and national security constitute significant harms. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
China demands its companies comply with local law after DeepSeek ban in South Korea

2025-02-17
Banca y Negocios
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI application) whose use has been suspended due to concerns about data privacy and security risks. The suspension and restrictions are responses to potential harms related to personal data protection and national security. However, the article does not report any realized harm caused by DeepSeek's use, only regulatory actions and warnings. Therefore, this event represents a plausible risk of harm from the AI system's use, qualifying it as an AI Hazard rather than an AI Incident. It is not merely complementary information because the suspension and warnings indicate a credible potential for harm, but no direct harm has been reported yet.
Four ways DeepSeek could change everything

2025-02-16
Forbes México
Why's our monitor labelling this an incident or hazard?
The article centers on the economic and geopolitical implications of DeepSeek's launch and its potential to accelerate AI adoption and competition. It does not describe any specific AI Incident (harm caused) or AI Hazard (plausible future harm) related to DeepSeek or its AI system. Instead, it provides contextual and strategic information about AI market trends, competition, and policy considerations, which fits the definition of Complementary Information. There is no direct or indirect harm described, nor a credible risk of harm detailed in the article.
South Korea's offensive: DeepSeek removed from app stores

2025-02-17
mdz
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI-related application that has raised concerns about personal data leakage, which constitutes a violation of personal data protection rights under applicable law. The suspension and removal are responses to these risks. Since the article indicates that the app's use poses risks of harm to personal data privacy (a form of violation of rights), and the authorities have acted to prevent further harm, this situation qualifies as an AI Incident due to realized or ongoing harm related to personal data protection violations caused by the AI system's use.
South Korea removes DeepSeek from app stores

2025-02-17
eju.tv
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot and language model) whose use has raised concerns about data privacy and security, leading to its suspension and restrictions by South Korean authorities. The event does not report actual harm caused by the AI system but highlights credible risks of data leaks and national security threats. The regulatory actions and warnings indicate a plausible risk of harm if the AI system continued operation without compliance. Hence, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct harm has yet occurred or been reported.
Could DeepSeek be banned in the United States?

2025-02-17
www.eldiario.net
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system, and the article focuses on the possibility that its use could be restricted or banned in the US due to security, privacy, and geopolitical concerns. No actual harm or incident has occurred yet, but the potential for future harm (e.g., privacy violations, espionage risks) is credible and plausible. Therefore, this situation fits the definition of an AI Hazard, as it describes circumstances where the AI system's use could plausibly lead to harm or legal restrictions, but no direct harm has been realized yet.
South Korea bans downloads of DeepSeek, the Chinese artificial intelligence app

2025-02-18
Periódico HOY
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and regulatory concerns about data privacy and security, which relate to potential violations of personal information protection laws. However, there is no indication that any harm has already occurred; the government has suspended downloads to prevent possible harm and ensure compliance. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (e.g., data privacy violations) if not properly managed. The event is not an AI Incident because no realized harm is reported, nor is it Complementary Information or Unrelated.
South Korea removes DeepSeek from app stores

2025-02-17
Confirmado.net
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot and language model) whose use has led to concerns about data privacy violations and potential leaks of sensitive information, which are harms to rights and security. The South Korean government and other institutions have suspended and blocked its use, indicating that harm or risk is materializing. The involvement of the AI system in causing these harms is direct, as the data collection and storage practices are integral to the AI's operation. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Seoul blocks China's DeepSeek artificial intelligence

2025-02-17
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has raised serious concerns about data privacy and potential violations of personal information protection laws in South Korea. The government has suspended the service pending improvements, indicating that harm has not yet occurred but could plausibly occur if the system continued operating without changes. The concerns relate to privacy rights, which fall under violations of human rights as per the framework. Since the harm is potential and preventive actions are being taken, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the suspension due to privacy concerns, not on responses to a past incident or broader ecosystem updates. It is not Unrelated because the AI system and its risks are central to the event.
South Korea suspends "DeepSeek" app services

2025-02-17
ASSABAHNEWS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('DeepSeek') whose service has been suspended due to concerns about data privacy and management practices. However, there is no indication that any harm has yet occurred; rather, the suspension is a preventive measure to ensure compliance with data protection laws. Therefore, this situation represents a plausible risk of harm related to AI use, making it an AI Hazard rather than an Incident. It is not merely complementary information because the suspension is a direct regulatory action in response to potential harm.
Over security concerns, South Korea suspends "DeepSeek" app services

2025-02-17
موقع أخبارك للأخبار المصرية
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly mentioned, and the issue arises from its use and data management practices. Although no direct harm such as injury or rights violations is reported as having occurred, the suspension is due to concerns about possible violations of personal data protection laws, which relate to human rights and legal obligations. Since the harm is not confirmed but there is a credible risk of legal and rights violations, this event constitutes an AI Hazard rather than an AI Incident. The focus is on preventing potential harm through regulatory intervention, not on reporting an actual harm event.
South Korea bans the DeepSeek app

2025-02-17
Babnet Tunisie
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('DeepSeek') whose use has raised legal concerns regarding personal data protection. However, there is no indication that harm has occurred yet; rather, the government is acting preemptively to prevent potential violations of privacy rights. This constitutes a plausible risk of harm related to AI use, making it an AI Hazard rather than an Incident. The focus is on preventing possible future harm through regulatory action.
DeepSeek stirs controversy over claims of ties to a ByteDance app

2025-02-19
اليوم السابع
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) involved in the collection and transmission of personal data. The unauthorized transfer of personal data to a third party without explicit user consent is a violation of legal obligations protecting fundamental rights, specifically data privacy rights. This breach has already occurred and led to regulatory intervention, indicating realized harm. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of applicable law protecting fundamental rights.
South Korea bans the "DeepSeek" app

2025-02-17
الوكيل الاخباري
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('DeepSeek') whose service is suspended due to concerns about data collection and management practices potentially violating personal data protection laws. This indicates involvement of an AI system and regulatory action due to potential legal and privacy harms. Since no actual harm or incident is reported, but the suspension is a preventive measure addressing plausible risks, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action based on potential harm, nor is it unrelated as it clearly involves an AI system and data privacy concerns.
South Korea bans the Chinese "DeepSeek" app (video)

2025-02-17
رؤيا الأخباري
Why's our monitor labelling this an incident or hazard?
The event involves an AI-related application (likely involving AI for data processing or search functionalities) whose operation was suspended due to non-compliance with data protection laws. However, there is no indication that the app's use or malfunction has directly or indirectly caused harm such as injury, rights violations, or other significant harms. The focus is on regulatory compliance and planned improvements, not on realized or imminent harm. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory response to AI system use rather than reporting an AI Incident or Hazard.
South Korea suspends the DeepSeek app: what's the story?

2025-02-17
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, and the event concerns its use and compliance with data protection laws. However, the article does not report any realized harm such as injury, rights violations, or other damages caused by the AI system. Instead, it focuses on regulatory measures to prevent potential harm related to data privacy. Therefore, this event is best classified as Complementary Information, as it provides an update on governance and regulatory response to AI use, rather than reporting an AI Incident or AI Hazard.
Over security concerns, South Korea suspends "DeepSeek" app services

2025-02-17
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article involves an AI system ('DeepSeek') whose use raised concerns about data privacy and legal compliance. The suspension and investigation indicate a plausible risk of violation of personal data protection laws, which are part of fundamental rights. Since no realized harm or incident is reported, but there is a credible risk that the AI system's use could lead to legal and rights violations, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the suspension due to potential harm, nor is it unrelated as it directly involves an AI system and data protection concerns.
"DeepSeek" app banned in South Korea

2025-02-17
شبكة الميادين
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the DeepSeek AI application) and describes government intervention due to concerns about data collection and privacy compliance. No actual harm or rights violations have been reported yet, but the suspension and investigation indicate a credible risk of harm to personal data privacy and rights. The event is about preventing potential harm rather than reporting an incident or providing follow-up information on a past incident. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to a breach of obligations under applicable law protecting fundamental rights if not properly managed.
South Korea suspends "DeepSeek" app services

2025-02-17
صحيفة الوئام الالكترونية
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly mentioned, and the issue arises from its use and data management practices. The suspension is due to concerns about violations of personal data protection laws, which relate to fundamental rights. Although no direct harm is reported as having occurred, the regulatory intervention indicates a plausible risk of harm to individuals' data privacy if the service continued without improvements. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to violations of rights if unaddressed.
South Korea temporarily halts China's "DeepSeek" app over privacy concerns

2025-02-17
وكالة الأنباء الكويتية - كونا
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and concerns about its data collection and privacy practices, which could lead to violations of personal information rights, a form of harm under the framework. Since the app's download has been suspended pending improvements to comply with local privacy laws, and users are cautioned about potential risks, this indicates a credible risk of harm but no confirmed incident of harm has occurred. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
South Korea suspends the local service of the Chinese artificial intelligence program

2025-02-17
وكالة أنباء البحرين
Why's our monitor labelling this an incident or hazard?
The AI system DeepSeek is involved, and the suspension is due to concerns about its data collection and management practices potentially violating personal data protection laws, which protect fundamental rights. Although no actual harm has been reported, the regulatory suspension reflects a credible risk that the AI system's use could lead to violations of rights if unaddressed. Hence, this is an AI Hazard, as the event concerns plausible future harm prevented by intervention rather than an incident where harm has already occurred.
This country bans the "DeepSeek" app

2025-02-17
تورس
Why's our monitor labelling this an incident or hazard?
The AI system 'DeepSeek' is explicitly mentioned and is involved in data collection practices that raised legal and privacy concerns. The government's suspension of the service and the requirement for improvements to comply with data protection laws indicate that the AI system's use could lead to violations of fundamental rights if unregulated. Since the article does not report actual harm but focuses on preventing potential violations, this event is best classified as an AI Hazard, reflecting plausible future harm related to data privacy violations.
South Korea suspends the "DeepSeek" AI app service over data-collection concerns

2025-02-17
جـــريــدة الفجــــــر المصــرية
Why's our monitor labelling this an incident or hazard?
The article focuses on the regulatory response to the AI application's data collection practices, highlighting concerns about privacy and legal compliance. There is no report of actual harm occurring yet, but the government's suspension and demand for improvements indicate a governance response to potential risks. The AI system's involvement is clear, but the event centers on mitigation and oversight rather than an incident or hazard per se. Hence, it fits the definition of Complementary Information, providing context on societal and governance responses to AI-related privacy concerns.
South Korean government announces suspension of local service for the "DeepSeek" app

2025-02-17
Addiyar
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('DeepSeek') and concerns about its data collection practices potentially violating privacy laws. However, there is no indication that actual harm has occurred yet; rather, the government has taken preventive action by suspending the service until compliance is ensured. This represents a plausible risk of harm related to personal data privacy, but no direct harm is reported. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to violations of personal data protection laws if unaddressed.
South Korea suspends services of the Chinese "DeepSeek" app

2025-02-17
شبكة الاعلام العراقي
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('DeepSeek') and concerns about its data collection and management practices, which relate to privacy and legal compliance. However, there is no indication that any harm has occurred yet, only potential risks related to data privacy. Therefore, this situation represents a plausible risk of harm due to AI system use but no realized harm is reported. This fits the definition of an AI Hazard rather than an Incident or Complementary Information.
(Update) Government suspends new downloads of "DeepSeek" over privacy concerns

2025-02-17
وكالة يونهاب للأنباء
Why's our monitor labelling this an incident or hazard?
The AI system 'DeepSeek' is explicitly mentioned, and the event revolves around concerns about its data collection and privacy practices. Although no direct harm has been reported, the government's suspension and investigation indicate a credible risk that the AI system could lead to violations of personal data privacy, which constitutes a potential violation of rights under applicable law. Since the harm is plausible but not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the suspension due to privacy concerns, which implies a credible risk of harm.

South Korea Halts "DeepSeek" over Serious Concerns

2025-02-17
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly involved, and its use has led to regulatory action due to data privacy concerns, which relate to violations of applicable laws protecting personal information (a form of human rights and legal obligations). Although no direct harm such as injury or property damage is reported, the suspension for non-compliance with data protection laws indicates a breach of obligations intended to protect fundamental rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a legal violation and a regulatory response.

Downloads of "DeepSeek" Suspended in South Korea... What Is the Reason?

2025-02-18
Masrawy.com
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly mentioned, and its use has raised concerns about violations of personal data protection laws, which relate to human rights and legal obligations. Although no direct harm is reported as having occurred, the suspension and regulatory actions indicate a credible risk of harm to personal data privacy. This situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights if unaddressed. There is no indication that harm has already materialized, so it is not an AI Incident. The article focuses on regulatory response and risk mitigation, not on a past incident or complementary information about a previous event.

South Korea Suspends Downloads of "DeepSeek" over Privacy Concerns

2025-02-18
IRAK HABER AJANSI
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly involved, and its use has led to concerns about violations of personal data privacy, which falls under violations of rights protected by law. Although no direct harm is reported, the suspension and warnings indicate that the AI system's use has already caused or is causing a breach of obligations under applicable law. Therefore, this qualifies as an AI Incident due to the realized violation of data protection laws and privacy rights.

"DeepSeek" Sends South Korean Users' Data to China's "ByteDance" | Yonhap News Agency

2025-02-18
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and its use has led to the unauthorized transfer of personal user data to a third party, which constitutes a violation of data protection laws and users' privacy rights. This is a breach of obligations under applicable law protecting fundamental rights, specifically privacy and data protection. Although the full extent of data transferred is not yet confirmed, the confirmed unauthorized data sharing and regulatory action indicate realized harm. Therefore, this qualifies as an AI Incident due to violation of legal rights and potential harm to users' privacy.

Government Suspends Local Service of Chinese AI App "DeepSeek" | Yonhap News Agency

2025-02-17
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose service was suspended due to concerns about data collection and management practices potentially violating personal information protection laws. Although no direct harm has been reported, the suspension indicates a credible risk that the AI system's operation could lead to violations of privacy rights, which falls under potential harm. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to an AI Incident if the issues are not addressed.

South Korea Suspends New DeepSeek Downloads, Says Data Protection Agency

2025-02-17
uol.com.br
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI application) whose use has raised data privacy concerns leading to regulatory action. However, there is no indication that harm has occurred yet; rather, the suspension is a preventive regulatory measure to ensure compliance with data protection laws. This fits the category of Complementary Information as it provides an update on governance and regulatory response related to AI use, without describing a realized AI Incident or a plausible AI Hazard.

DeepSeek Raises Suspicions in Portugal. CNPD Assesses the Chinese Startup's Activity - SAPO Tek

2025-02-17
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI model) and concerns about its data privacy practices, which could lead to violations of fundamental rights (privacy). However, the article only reports investigations, complaints, and regulatory scrutiny without confirmed or realized harm. Therefore, it does not meet the criteria for an AI Incident (no direct or indirect harm has occurred yet). It also does not describe a specific plausible future harm event but rather ongoing regulatory evaluation and responses, which fits the definition of Complementary Information. The article provides important context on governance and societal responses to potential AI-related privacy issues but does not itself describe a new AI Incident or AI Hazard.

South Korea Suspends DeepSeek Downloads over Privacy Issues - Tecnoblog

2025-02-17
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The DeepSeek app is an AI system (chatbot) that processes user data. The South Korean data protection authority suspended new downloads because the app does not comply with local privacy laws, indicating a risk of privacy violations. Although the app remains accessible via browser and existing installations, the government advises caution, highlighting potential harm. No direct or indirect harm has been reported yet, only regulatory preventive action. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm (privacy violations), but no incident has occurred.

South Korea Suspends Downloads of the DeepSeek AI for Security Reasons

2025-02-17
TecMundo
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use has directly led to harms including data exposure and security vulnerabilities, which are violations of privacy and potentially national security concerns. These constitute realized harms under the framework, making this an AI Incident. The article focuses on the suspension due to these harms, not just potential future risks or general AI news, so it is not a hazard or complementary information. The harms are related to privacy and security breaches, which fall under violations of rights and harm to communities or individuals. Therefore, the event is classified as an AI Incident.

DeepSeek: South Korea Suspends Downloads of the Chatbot

2025-02-17
Olhar Digital - The future passes here first
Why's our monitor labelling this an incident or hazard?
The DeepSeek chatbot is an AI system whose downloads have been suspended by South Korean authorities because it failed to comply with data protection laws. While this involves the use of an AI system and concerns about personal data privacy, the article does not report any realized harm or violation but rather a regulatory intervention to prevent such harm. The suspension aims to mitigate potential future harm to users' personal data privacy. Hence, this event fits the definition of an AI Hazard, as it plausibly could lead to harm if unaddressed, but no direct or indirect harm has yet been reported.

DeepSeek under CNPD Investigation in Portugal | TugaTech

2025-02-16
TugaTech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI model) and concerns about its use and compliance with data protection laws, which relates to potential violations of fundamental rights (privacy). However, since no confirmed violation or harm has occurred yet, and the article discusses the plausible risk and regulatory scrutiny, this constitutes an AI Hazard rather than an AI Incident. The investigation and potential future measures indicate a credible risk of harm but no realized harm at this stage.

Visão | User Data Collected by DeepSeek Sent to TikTok's Owner

2025-02-18
Visão
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that collects user data, and the confirmed communication of this data to ByteDance without explicit user consent constitutes a violation of privacy rights under South Korean law. This represents a breach of obligations intended to protect fundamental rights, specifically data privacy rights. Since the data transfer has occurred and is under regulatory scrutiny, this qualifies as an AI Incident due to the realized harm of potential privacy violations and unauthorized data sharing.

Yet Another Country Has Banned DeepSeek! South Korea Confirms Data Was Sent to TikTok's Owner

2025-02-18
Pplware
Why's our monitor labelling this an incident or hazard?
The DeepSeek AI system is explicitly involved as the subject of the privacy breach. The South Korean authorities confirmed that the AI app collected excessive personal data and transferred it to third parties without user consent, violating privacy laws. This constitutes a direct harm to users' rights and privacy, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The event is not merely a potential risk but a realized harm, as data leakage has been confirmed. Therefore, the classification is AI Incident.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use has been suspended due to data privacy issues, which relate to violations of legal obligations protecting fundamental rights (privacy). Although no direct harm is reported as having occurred, the ban reflects concerns about potential or ongoing violations. Since the AI system's use is stopped pending improvements, this is a governance response to a risk of harm rather than a realized incident. Therefore, this event is best classified as Complementary Information, as it provides an update on regulatory and governance actions concerning AI use and privacy compliance, without reporting a specific AI Incident or AI Hazard.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, thus an AI system. The event involves its use leading to significant privacy and data protection concerns, which are violations of rights under applicable law. The South Korean authorities' decision to ban the app from app stores and warnings to users reflect that harm related to privacy has already occurred or is ongoing. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to a breach of data protection laws and potential harm to users' privacy rights. The event is not merely a potential risk (hazard) or a complementary information update but a concrete incident involving harm and regulatory action.

DeepSeek: South Korea Blocks Chinese Chatbot over Data Protection Concerns

2025-02-17
Spiegel Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and concerns about its data privacy practices. The South Korean data protection authority has suspended access due to non-compliance with privacy laws, indicating a potential for violation of rights if the service continued without improvements. Since no actual harm or violation has been reported yet, but the risk is credible and the service is suspended to prevent harm, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the suspension is a direct regulatory response to a plausible risk of harm from the AI system's use.

DeepSeek Blocked: China Accuses South Korea of Political Motives

2025-02-17
20 Minuten
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system as it is a Chinese AI application. The ban is due to data protection concerns, which relate to potential violations of privacy rights, a form of human rights. Although no direct harm is reported, the regulatory action is a response to plausible risks of harm from the AI system's use. Since the event concerns the potential for harm and regulatory intervention rather than an actual realized harm, it fits best as Complementary Information, providing context on governance and societal response to AI-related risks. There is no indication of an actual AI Incident or AI Hazard occurring at this time.

Artificial Intelligence: South Korea Bans Chinese AI DeepSeek

2025-02-17
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot based on open-source language models) whose use has led to realized harms or violations, such as breaches of data protection laws (e.g., GDPR), risks of unauthorized data access, and security vulnerabilities that could harm users or national security. The bans and restrictions by multiple countries are responses to these harms. Since the harms (privacy violations, security risks) are occurring or have occurred due to the AI system's use, this qualifies as an AI Incident under the framework. The article does not merely discuss potential future harms or general AI developments but focuses on concrete regulatory actions taken because of actual or ongoing harms linked to the AI system.

Security Concerns: South Korea Bans Chinese AI DeepSeek

2025-02-17
Tages Anzeiger
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek, a chatbot) whose use is restricted due to data privacy issues. However, there is no indication that the AI system has caused any direct or indirect harm yet. The ban is a preventive regulatory action addressing potential privacy risks, not a report of realized harm or incident. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related privacy concerns without describing an AI Incident or AI Hazard.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Freie Presse
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use has led to regulatory bans due to non-compliance with data protection laws and security concerns. While no direct harm (such as confirmed data leaks or attacks) is reported, the concerns about data privacy violations, potential government data access under Chinese law, security vulnerabilities, and the app's ability to generate dangerous content represent credible risks of harm. The bans and restrictions by multiple countries reflect recognition of these plausible harms. Since the harms are potential and preventive actions are being taken, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information or unrelated, as it centers on the AI system's risks and regulatory responses to prevent harm.

DeepSeek: China's Chatbot in the Crosshairs of Global Data Protection Concerns

2025-02-17
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system whose use has directly led to privacy violations and potential information leaks, which are harms to individuals' rights and possibly national security. The article details actual regulatory actions and bans due to these harms, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly caused violations of privacy laws and risks to information security, fulfilling the criteria for harm (c) under the AI Incident definition.

South Korea Halts Downloads of the Chinese AI App DeepSeek

2025-02-17
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The AI chatbot app DeepSeek is an AI system involved in the event. The government's action to stop downloads is due to concerns about data privacy compliance, indicating potential violations of legal obligations protecting personal data. Since no actual harm or incident has been reported, but there is a credible risk that the AI system's use could lead to violations of privacy rights, this qualifies as an AI Hazard. The event is not merely general AI news or a complementary update but a regulatory response to a plausible risk of harm from the AI system's use.

South Korea Halts DeepSeek Downloads over Data Protection Concerns

2025-02-18
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. While no direct harm has been reported, the investigation by the data protection authority indicates plausible risks of privacy violations and national security concerns, which could lead to an AI Incident if confirmed. However, since the article focuses on the investigation and potential risks rather than realized harm, this qualifies as an AI Hazard.

DeepSeek under Suspicion: Data Sharing with TikTok Owner ByteDance

2025-02-19
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI company with a powerful AI model, and the event centers on allegations of unauthorized user data sharing with ByteDance. This involves the use and potential misuse of an AI system's data handling capabilities. Although no confirmed harm has occurred yet, the plausible risk of privacy violations and data misuse is significant. The suspension of app downloads and government investigation indicate concern over potential legal breaches and harms. Since the article does not confirm actual harm but highlights credible risks, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

China Uses the DeepSeek Artificial Intelligence Model in Local Government

2025-02-20
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (DeepSeek) in local government decision-making. Although no direct harm or incident is reported, the mandated use of AI in governance could plausibly lead to harms such as violations of rights, poor decisions affecting communities, or other significant harms. Since no actual harm has yet occurred or been reported, this qualifies as an AI Hazard rather than an AI Incident.

Top News - Chinese App Blocked. DeepSeek: Data Security Concerns Mount - Top Channel

2025-02-18
top-channel.tv
Why's our monitor labelling this an incident or hazard?
The involvement of an AI system (DeepSeek) is explicit, and the event concerns its use and compliance with data privacy laws. Although no actual harm has been reported, the blocking action is based on concerns that the AI system's use could lead to violations of privacy rights, which fits the definition of an AI Hazard. Since the event is about preventing potential harm rather than reporting an incident or providing complementary information, it is best classified as an AI Hazard.

South Korea Removes DeepSeek from App Stores over Privacy Concerns - Telegrafi

2025-02-17
Telegrafi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised privacy concerns, leading to regulatory action to prevent further downloads. While no direct harm is reported as having occurred, the removal is due to plausible risks of privacy violations, which are a form of harm to personal data rights. Therefore, this constitutes an AI Hazard because the AI system's use could plausibly lead to harm, prompting preventive measures. There is no indication of realized harm yet, so it is not an AI Incident. The focus is on potential harm and regulatory response, not on a past incident or complementary information about a past incident.

DeepSeek Shares User Data with TikTok Owner ByteDance, Says South Korea - Telegrafi

2025-02-18
Telegrafi
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI startup, so an AI system is involved. The event describes the use of AI systems in handling user data and the sharing of this data with ByteDance without clear user consent or adequate protection. This has led to regulatory action and concerns about violations of data protection laws and user privacy, which are breaches of fundamental rights. The harm is realized as the data sharing has already occurred, and the regulator has intervened. Hence, this qualifies as an AI Incident due to violations of human rights and legal obligations related to data privacy caused by the AI system's use and data handling practices.

South Korea Suspends Use of DeepSeek; Xi Spares No Criticism

2025-02-17
Ora News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) whose use has been suspended due to concerns about data collection practices that threaten personal data privacy. This constitutes a violation of rights under applicable law protecting fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized in the form of privacy concerns significant enough to prompt government intervention. The involvement of the AI system is direct, as the suspension is due to its data handling practices. Hence, this is not merely a potential hazard or complementary information but an AI Incident.

South Korea Accuses DeepSeek: It Is Sharing User Data with TikTok's Owner, ByteDance

2025-02-18
Ora News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system implicated in sharing user data improperly, which is a breach of data protection and privacy rights, falling under violations of human rights and legal obligations. The event describes actual harm as regulatory bodies have acted and users have been affected by the data sharing. The AI system's use directly led to these harms, making this an AI Incident rather than a hazard or complementary information. The involvement of AI and the resulting harm to users' rights and data privacy justify classification as an AI Incident.

Seoul Distrusts Chinese Artificial Intelligence: "We Suspect It Collects Data"

2025-02-17
JavaNews.al
Why's our monitor labelling this an incident or hazard?
The event involves an AI system whose use is suspended due to concerns about data privacy and compliance with legal frameworks. While these concerns indicate a plausible risk of harm (privacy violations), no actual harm or incident is reported. The government's action is a preventive measure to ensure compliance and protect privacy, which fits the definition of Complementary Information as it provides context on governance and regulatory responses to AI-related privacy concerns rather than reporting a realized AI Incident or a direct AI Hazard.

Privacy Concerns over the Chinese Artificial Intelligence App "DeepSeek"

2025-02-19
Opinion.al
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use has raised significant privacy concerns, leading to regulatory action and restrictions. Although no actual harm (such as data breaches or misuse) is explicitly reported, the potential for harm to users' personal data privacy and related rights is credible and recognized by authorities and experts. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of privacy and related rights if not properly managed.

South Korea Removes DeepSeek from App Stores over Privacy Concerns

2025-02-17
News Agency - Zhurnal.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and government intervention due to privacy concerns, which relate to potential violations of personal data protection laws (a form of legal rights). However, there is no indication that actual harm or violations have occurred; rather, the ban aims to prevent such outcomes until the system complies with legal requirements. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident (privacy violations). The event is not an AI Incident because no harm has been realized, nor is it Complementary Information or Unrelated, as it directly concerns an AI system and potential harm.

Privacy Concerns over the Chinese Artificial Intelligence App "DeepSeek"

2025-02-19
Portalb
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use has raised significant privacy concerns, leading to regulatory action and suspension of the app. Although no direct harm or violation has been reported as having occurred, the concerns about data privacy and potential government access to user data represent a plausible risk of harm to users' rights. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of privacy and related harms, but no confirmed incident has yet materialized.

Over Data Protection Concerns: South Korea Removes DeepSeek from App Stores

2025-02-17
tagesschau.de
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has raised significant concerns about data privacy violations, potential misuse of personal data, and security vulnerabilities. Multiple countries have taken regulatory actions to restrict or ban the app due to these risks. While no concrete incident of harm (such as data breaches or direct misuse) is reported, the credible and widespread regulatory responses reflect a plausible risk of harm to individuals' privacy and national security. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving violations of rights and security breaches. The event is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.

South Korea Bans Chinese AI DeepSeek - WELT

2025-02-17
DIE WELT
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot based on language models) whose use has raised significant security and privacy concerns, including potential misuse to generate dangerous content and data privacy violations. The banning by South Korean authorities and restrictions in other countries reflect recognition of these risks. However, the article does not describe any actual harm or incident caused by the AI system but rather the plausible risks and regulatory actions taken to prevent harm. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as privacy violations or dangerous content dissemination, but no direct or indirect harm has been reported yet.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Cash
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory and governmental responses to the AI system DeepSeek, including bans and usage restrictions, but does not report any actual harm or incident caused by the AI system. The focus is on policy and control measures rather than a realized AI Incident or a plausible future harm event. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal responses to an AI system without describing a specific AI Incident or AI Hazard.

South Korea Bans Chinese AI DeepSeek

2025-02-17
wallstreet:online
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot based on language models) whose use has led to direct concerns about violations of data protection laws (human rights and legal obligations) and risks to national security (harm to communities). The bans and investigations are responses to these realized harms. The app's data handling practices and security flaws have already caused regulatory actions and restrictions, indicating that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of rights and potential harm to communities and national security.

South Korea Issues Ban on Chinese AI DeepSeek

2025-02-17
finanzen.ch
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot based on open-source language models) whose use has led to significant concerns about privacy violations, data security, and national security risks. The app's data handling practices and security flaws have already prompted regulatory actions, including bans and investigations, indicating that harms related to rights violations and security have materialized or are imminent. The involvement of the AI system in these harms is direct, as the app's design and operation cause or enable these risks. Therefore, this event qualifies as an AI Incident due to realized or ongoing violations of privacy and security rights linked to the AI system's use.

ROUNDUP: South Korea Bans Chinese AI DeepSeek

2025-02-17
Börse Online
Why's our monitor labelling this an incident or hazard?
DeepSeek is explicitly described as an AI chatbot based on language models, thus qualifying as an AI system. The event involves the use of this AI system and its non-compliance with data protection laws (e.g., GDPR), which constitutes a violation of legal obligations protecting fundamental rights. Additionally, security vulnerabilities and data handling practices pose risks to users' privacy and national information security, which are harms to communities and violations of rights. The bans and restrictions indicate that these harms are recognized and have occurred or are ongoing. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Weser Kurier
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot based on language models) whose use has led to realized harms or violations, specifically breaches of data protection laws and risks to information security. The bans and restrictions by multiple national authorities are responses to these harms. The event involves the use of the AI system leading to violations of privacy and security concerns, which fall under violations of human rights and legal obligations. Therefore, this is an AI Incident rather than a mere hazard or complementary information. The article details actual regulatory actions taken due to these harms, not just potential risks or general AI news.

South Korea Bans Chinese AI DeepSeek

2025-02-17
Reutlinger General-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use has led to regulatory bans and restrictions due to violations of data protection laws and security concerns. The harms include breaches of privacy rights, potential surveillance risks, and threats to national security, which fall under violations of human rights and legal obligations. These harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Artificial Intelligence: South Korea Bans Chinese AI DeepSeek

2025-02-17
Die Oberbadische - Markgräfler Tagblatt - Weiler Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and discusses its development and use. The concerns raised relate to potential violations of data protection laws and privacy rights, as well as cybersecurity risks that could lead to harm. Since no actual harm or incident is reported, but credible risks and vulnerabilities are identified that could plausibly lead to harm, this event fits the definition of an AI Hazard. It is not Complementary Information because the focus is on the risks and regulatory response (ban) rather than updates or responses to a past incident. It is not an AI Incident because no direct or indirect harm has occurred yet.

Experts Say the Bans on DeepSeek May Be "Legitimate," but Question Marks Remain

2025-02-14
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
DeepSeek is explicitly described as an AI system (a generative AI model) whose development and use have raised security and privacy concerns. The bans in various jurisdictions are motivated by plausible risks of data misuse, unauthorized government access, and national security threats. These concerns constitute potential harms that could arise from the AI system's use, fitting the definition of an AI Hazard. There is no indication in the article that any direct harm (such as data breaches, privacy violations, or other incidents) has already occurred due to DeepSeek. The article mainly provides expert opinions and geopolitical context explaining why the bans might be justified, which aligns with the AI Hazard classification rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the discussion.

AI Ban in South Korea: Downloads Blocked

2025-02-17
Sabah
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use raises concerns about personal data privacy and security. The ban on downloading the AI model is a regulatory response to potential risks, indicating a plausible future harm related to personal data violations if the AI were used without proper safeguards. Since no actual harm is reported yet, but there is a credible risk leading to regulatory intervention, this qualifies as an AI Hazard rather than an Incident. The article does not describe realized harm or ongoing violations, so it is not an AI Incident. It is also not merely complementary information, as the main focus is on the ban due to potential harm, nor is it unrelated.

South Korean Authorities Block DeepSeek Downloads from App Stores

2025-02-17
Webrazzi
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI application) whose use involves processing personal data. The authorities' intervention is due to concerns about privacy law compliance and data transfer to third parties, which could lead to violations of users' rights (a form of harm under the framework). However, the article does not report any actual realized harm yet, only potential risks and regulatory measures to prevent harm. Therefore, this event constitutes an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving privacy violations if not properly managed.

South Korea Bans the Use of Chinese AI Platform DeepSeek

2025-02-17
T24
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system providing generative AI services. The ban is due to concerns about its data collection and management practices potentially violating personal data protection laws, which relate to human rights and legal obligations. The article does not report actual harm or incidents caused by the AI system but focuses on regulatory intervention to prevent such harm. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to violations of rights if not properly regulated. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the regulatory ban due to potential harm, not on responses or ecosystem updates. It is not unrelated as it clearly involves an AI system and regulatory action concerning its use.

DeepSeek Banned in Several More Countries Over Privacy Concerns

2025-02-17
Technopat
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data processing and user interaction. The bans and regulatory scrutiny stem from concerns about privacy violations and data transfer, which relate to potential breaches of legal obligations protecting personal data and privacy rights. Since no actual harm is reported but there is a credible risk of violations and misuse, this situation constitutes an AI Hazard rather than an Incident. The article mainly reports on governmental responses and regulatory measures, indicating plausible future harm if the AI system's practices continue unmitigated.

Is the Rationale Security? Or Competition? DeepSeek's Rise and the Bans: Experts Weigh In!

2025-02-14
Dünya
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and discusses its development, use, and the resulting geopolitical and security concerns. However, it does not describe any actual harm caused by the AI system, nor does it report a specific event where harm was realized or a near-miss incident. The concerns about data privacy and security are speculative and relate to potential risks rather than documented incidents. The bans and expert opinions reflect governance and societal responses to perceived risks rather than an AI Incident or Hazard. Hence, the article fits the definition of Complementary Information, providing supporting context and analysis without reporting a new AI Incident or AI Hazard.

DeepSeek's Rise and the Bans: Experts Weigh In!

2025-02-14
Dünya
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model) whose use and data collection practices have triggered security concerns leading to bans in multiple jurisdictions. The article highlights potential risks related to data privacy and geopolitical access to data, which could plausibly lead to violations of rights or other harms. However, no actual harm or incident is reported; the bans are precautionary measures based on plausible future risks. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

South Korea Bans DeepSeek Downloads

2025-02-17
Diken
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and regulatory concerns about its data collection and security practices. However, it does not describe any actual harm or incident caused by the AI system's development, use, or malfunction. Instead, it reports a precautionary regulatory measure to prevent potential risks. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI-related risks without describing a specific AI Incident or AI Hazard.

The DeepSeek Panic, Part 4

2025-02-17
Bloomberg
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (DeepSeek's AI model) and highlights geopolitical and technological concerns about AI capabilities and restrictions. However, there is no indication that this development has directly or indirectly caused any harm such as injury, rights violations, or disruption. The article focuses on competitive innovation, sanctions circumvention, and strategic implications rather than any realized or imminent harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI ecosystem developments and governance concerns.

South Korea Blocks the DeepSeek Chatbot

2025-02-17
Vesti.bg
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot whose operation involves processing user data. The authorities have blocked its availability pending a thorough investigation into its data protection compliance, indicating concerns about potential violations of privacy rights. Since no actual harm or breach has been reported yet, but there is a credible risk that the AI system's use could lead to violations of data protection laws, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because the main focus is on the potential risk and regulatory action, not on a response to a past incident.

South Korea Halts Access to DeepSeek

2025-02-17
nova.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) whose data handling practices are under scrutiny for compliance with personal data protection laws. The suspension of the app's availability is a preventive measure to avoid potential violations of user privacy rights, which constitute a breach of fundamental rights under applicable law. Since no actual harm or violation has been confirmed or reported yet, but there is a credible risk that the AI system's use could lead to such harm, this event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the regulatory action and potential risk, not on updates or responses to a past incident. It is not an AI Incident because no realized harm has occurred.

South Korea Halts DeepSeek

2025-02-17
frognews.bg
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is under scrutiny for data privacy compliance. The authorities have not reported any realized harm but have taken preventive action by suspending the app pending investigation and improvements. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of data protection laws and associated harms if not addressed. Since no actual harm has been reported, it is not an AI Incident. It is not Complementary Information because the article focuses on the suspension and investigation, not on updates or responses to a past incident. It is not Unrelated because the AI system and potential harm are central to the event.

South Korea Halts the AI Chatbot DeepSeek

2025-02-17
Dnes.bg
Why's our monitor labelling this an incident or hazard?
The AI chatbot DeepSeek is explicitly mentioned and is an AI system. The event stems from its use and the concerns about its data processing practices potentially violating local data protection laws, which protect fundamental rights. No actual harm or incident is reported yet, but the regulatory actions and warnings indicate a credible risk of harm to users' privacy and rights. Hence, this is an AI Hazard rather than an AI Incident. The article focuses on regulatory responses and potential risks rather than realized harm, so it is not Complementary Information. It is clearly related to AI and potential harm, so it is not Unrelated.

South Korea, Too, Halts DeepSeek

2025-02-17
bTV Новините
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, so an AI system is involved. The authorities' action to suspend the app's availability is due to concerns about data privacy compliance, which relates to potential violations of legal rights (human rights and data protection laws). Since no actual harm or incident is reported, but there is a credible risk that the AI system's use could lead to violations of rights, this fits the definition of an AI Hazard. The event is not a Complementary Information piece because it is not an update or response to a previously known incident but a new regulatory action based on potential risk. It is not an AI Incident because no harm has been realized yet.

South Korea Temporarily Suspends a Chatbot

2025-02-17
Actualno.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot whose data handling practices are under investigation for compliance with privacy laws. The article mentions an open database with over 1 million records, implying a data exposure incident. The authorities have suspended the chatbot's availability pending investigation and improvements, indicating that harm related to privacy rights has occurred or is ongoing. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to a violation of human rights (privacy/data protection). The event is not merely a potential risk (hazard) or a complementary update but a concrete incident involving harm or legal breach.

South Korea Cuts Off China's Artificial Intelligence

2025-02-18
Blitz.bg
Why's our monitor labelling this an incident or hazard?
The AI system (DeepSeek R1 chatbot) is explicitly mentioned and is involved in processing personal data. The event stems from the use of the AI system and concerns about its compliance with data protection laws, which are designed to protect fundamental rights. While no direct harm has been confirmed, the investigation and app removal indicate plausible risks of violations of privacy rights. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of human rights (privacy) if unaddressed. Since no actual harm or incident is reported yet, and the focus is on regulatory action and investigation, the classification is AI Hazard.

South Korea Temporarily Bans the DeepSeek App for This Reason

2025-02-17
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use has raised concerns about privacy and data protection, which relate to violations of human rights under applicable law. Although no realized harm has been reported, the temporary ban and advisories indicate a plausible risk of harm if the AI system's data handling practices are non-compliant. Therefore, this situation constitutes an AI Hazard because it plausibly could lead to an AI Incident involving violations of privacy rights if the issues are not resolved. It is not an AI Incident yet because no actual harm has been documented, nor is it merely complementary information or unrelated news.

"تعاون مع TikTok لتسريب البيانات الشخصية".. كوريا الجنوبية توجه اتهامات إلى DeepSeek | المصري اليوم

2025-02-18
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) whose use has directly led to harm in the form of violations of data protection laws and user privacy rights, which are fundamental human rights. The regulatory authority's findings of data sharing with ByteDance and lack of transparency confirm the AI system's role in causing these harms. The removal of the app from stores and warnings to users further indicate that harm has materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

South Korea Suspends DeepSeek App Downloads Over Security Concerns

2025-02-17
البوابة العربية للأخبار التقنية
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an intelligent assistant app) whose use has raised data privacy and security concerns leading to regulatory bans and investigations. The article highlights potential risks of data misuse and non-compliance with local data protection laws, which could plausibly lead to violations of rights or harm to users if unaddressed. However, no actual harm or incident is reported, only preventive regulatory measures and ongoing investigations. Thus, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

South Korea Bans Downloads of the Chinese AI App DeepSeek

2025-02-17
عالم التقنية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and concerns about its handling of personal data, including unauthorized data transfer to another company. While no actual harm or violation has been confirmed or reported as having occurred, the authorities' preventive restrictions indicate a credible risk of privacy violations and potential harm to users. The event involves the use of an AI system and plausible future harm related to data privacy, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

South Korea Temporarily Bans the DeepSeek App

2025-02-18
المحترف (Almohtarif)
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use is under scrutiny for privacy and data protection issues. Although there is no explicit report of realized harm, the regulatory actions and warnings indicate a plausible risk of harm related to privacy violations. Since the event focuses on potential risks and regulatory response rather than an actual incident causing harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

South Korea Bans DeepSeek Downloads

2025-02-17
جريدة الوطن
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot application, clearly an AI system. The ban on new downloads by South Korean authorities is due to concerns about compliance with personal data protection laws and national security, indicating potential for harm related to privacy violations and security risks. No actual harm is reported yet, but the regulatory response shows a credible risk of harm if the AI system were used without improvements. Hence, this is an AI Hazard, reflecting plausible future harm from the AI system's use, rather than an AI Incident where harm has already occurred.

South Korea Removes the DeepSeek App from App Stores Pending a Privacy Review

2025-02-18
وكالة الصحافة المستقلة
Why's our monitor labelling this an incident or hazard?
The article clearly identifies DeepSeek's AI chatbot as the AI system involved. The removal from app stores and regulatory scrutiny stem from the AI system's use and its failure to comply with privacy laws, which is a legal and rights-related concern. However, the article does not report any realized harm such as injury, rights violations with complaints or lawsuits, or other significant harms caused by the AI system's operation. Instead, it describes regulatory actions and precautionary measures taken to prevent potential privacy harms. This fits the definition of Complementary Information, as it details governance and societal responses to AI-related privacy issues without describing a new AI Incident or AI Hazard.

Seoul Accuses DeepSeek of Sharing User Data with ByteDance

2025-02-20
Asharq News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) and its use, specifically the handling and sharing of user data. The South Korean data protection authority has identified non-compliance with privacy laws and insufficient transparency, which directly implicates violations of users' rights. The harm is realized as users' personal data has been shared without clear consent, leading to regulatory actions and user warnings. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of obligations under applicable law intended to protect fundamental rights (privacy). The event is not merely a potential risk but an actual incident with confirmed data sharing and regulatory response.

Download Ban on AI Announced in South Korea

2025-02-17
tabnak.ir
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned and is involved in the event. The issue stems from the AI system's use, specifically its non-compliance with data privacy laws, which constitutes a violation of legal obligations protecting fundamental rights. This violation has already occurred, as evidenced by the regulatory action and the ban on new downloads of the app until compliance is achieved. Therefore, this event qualifies as an AI Incident due to the realized harm (privacy rights violations) caused by the AI system's use.

"دیپ سیک" در فروشگاه‌های برنامه کره جنوبی مسدود شد

2025-02-17
عصر ايران (www.asriran.com)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, which is an AI model competing with ChatGPT. The regulatory actions stem from concerns about how the AI system collects and processes personal data, including transferring data to a foreign company, which could violate privacy rights (a form of human rights violation). However, the article does not report any actual harm or incident caused by the AI system's use, only potential risks and regulatory precautionary measures. The blocking of downloads and advisories to users indicate a plausible risk of harm if the AI system's data practices are not compliant. Hence, this fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving privacy violations or security harms, but no direct or indirect harm has yet occurred.

DeepSeek Banned in South Korea

2025-02-17
خبرگزاری باشگاه خبرنگاران (YJC)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) whose service was suspended due to concerns about data collection practices and compliance with personal data protection laws in South Korea. The suspension and investigation indicate that the AI system's use could plausibly lead to violations of privacy rights, which fall under harm category (c) - violations of human rights or breach of legal obligations. Since no actual harm is reported as having occurred yet, but the risk is credible and has led to regulatory action, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the suspension due to potential harm, not on responses to a past incident or broader ecosystem updates. It is not Unrelated because the event directly involves an AI system and potential harm.

"دیپ سیک" در فروشگاه‌های برنامه کره جنوبی مسدود شد

2025-02-17
ایسنا
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system ('DeepSeek') and details regulatory measures taken due to concerns about data privacy and security risks. The AI system's development and use are implicated in potential violations of personal data protection laws and security concerns. However, the article does not report any realized harm or incident caused by the AI system, only potential risks and precautionary blocking. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no direct or indirect harm has yet occurred.

"دیپ سیک" در فروشگاه‌های برنامه کره جنوبی مسدود شد

2025-02-17
جهان مانا
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly involved, as it is an AI startup offering an AI model competing with ChatGPT. The event stems from the use and deployment of this AI system, specifically regarding its data handling practices. Although no direct harm has been reported, the regulatory blocking and warnings indicate a plausible risk of harm to users' privacy and security, which could constitute violations of rights if realized. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving privacy violations or security breaches. The event is not an AI Incident because no actual harm has been reported yet, nor is it Complementary Information or Unrelated.

South Korea Removes DeepSeek from Its App Stores

2025-02-18
جهان مانا
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use and data handling practices have raised concerns about privacy and security, prompting regulatory intervention. Although no direct harm has been reported, the potential for violation of personal data rights and security risks is credible. Therefore, this event constitutes an AI Hazard, as the AI system's use could plausibly lead to harm (privacy violations and security risks) if not properly managed. The event is not an AI Incident because no realized harm is described, nor is it merely complementary information or unrelated news.

South Korea Pulls DeepSeek from App Stores While Reviewing Privacy Policies

2025-02-17
Terra
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and concerns about its data processing practices potentially violating privacy laws. The authorities have removed the app from stores as a precaution while reviewing compliance, indicating a plausible risk of harm (privacy violations) but no confirmed incident of harm or rights violation has occurred yet. This fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident if the privacy issues are not resolved, but no direct or indirect harm has been reported so far.

South Korea Bans DeepSeek

2025-02-17
Brasil 247
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI service whose data collection and management practices are under regulatory scrutiny for compliance with data protection laws. The suspension and investigation indicate potential violations of legal obligations protecting personal data, which fall under violations of applicable law intended to protect fundamental rights. Since the article describes an ongoing investigation and suspension but does not report actual harm, this event represents a plausible risk of harm related to AI system use, qualifying it as an AI Hazard rather than an Incident. It is not merely Complementary Information because the suspension and regulatory action indicate a significant potential for harm, even though no realized harm is reported yet.

South Korea to Develop Its Own AI App After Blocking China's DeepSeek

2025-02-20
Observador
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek, a language model) and concerns about its data management practices violating South Korean data protection laws. The suspension of the app and warnings to users reflect a governance and regulatory response to potential privacy risks. There is no explicit mention of realized harm such as injury, rights violations, or other direct impacts caused by the AI system. The event focuses on regulatory action and future development plans, which fits the definition of Complementary Information rather than an Incident or Hazard.

DeepSeek Investigated for Leaking User Data to China

2025-02-20
O Antagonista
Why's our monitor labelling this an incident or hazard?
The DeepSeek chatbot is an AI system that processes user data. The investigation concerns the illegal sharing of personal data, which constitutes a violation of privacy rights and applicable data protection laws. This is a breach of obligations intended to protect fundamental rights, specifically privacy and data protection. The event describes realized harm (privacy violations and potential misuse of personal data), not just potential harm. Hence, it meets the criteria for an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm related to rights violations.