DeepSeek AI Faces Global Scrutiny Over Data Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Italy's data protection authority, Garante, has demanded Chinese AI company DeepSeek clarify its data collection practices and storage locations, amid fears of privacy risks to millions of Italians. DeepSeek's app has been temporarily removed from Italian app stores. Australia and the US have also expressed concerns over potential privacy and security issues.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (DeepSeek) and the restrictions imposed by organizations due to fears of information leakage and privacy breaches. Although no actual harm is reported as having occurred, the concerns about potential data leaks and privacy violations constitute a plausible risk of harm. Therefore, this situation fits the definition of an AI Hazard, as the use or presence of the AI system could plausibly lead to harm related to privacy and information security.[AI generated]
AI principles
Privacy & data governance; Transparency & explainability; Accountability; Respect of human rights; Robustness & digital security

Industries
Digital security; IT infrastructure and hosting; Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

AI system task
Other


Articles about this incident or hazard

Chinese AI app: Italy blocks access to DeepSeek - US also puts the app under pressure

2025-01-31
Tages-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and discusses its rapid market success and regulatory scrutiny, which relates to governance and societal responses to AI. There is no indication that the AI system has caused any direct or indirect harm (such as privacy violations confirmed, health or safety issues, or other harms). The regulatory investigation and suspicions of misuse are ongoing and do not yet constitute an incident or hazard. Hence, the event is Complementary Information as it updates on governance and market developments related to AI without describing a specific AI Incident or AI Hazard.

Chinese AI app: Italy blocks access to DeepSeek - US also puts the app under pressure

2025-01-31
Der Bund
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data processing and AI-generated content. The article highlights regulatory investigation and legal concerns about data privacy and compliance with GDPR, which are important governance and societal responses to AI risks. There is no indication that DeepSeek's use has directly or indirectly caused harm such as rights violations or other damages. The focus is on potential legal non-compliance and regulatory action, not on an AI Incident or a plausible future harm (hazard). Hence, the event is Complementary Information, providing updates on governance and regulatory scrutiny rather than reporting a new AI Incident or AI Hazard.

"DeepSeek" use restricted | Saitama Shimbun

2025-01-30
Saitama Shimbun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and the restrictions imposed by organizations due to fears of information leakage and privacy breaches. Although no actual harm is reported as having occurred, the concerns about potential data leaks and privacy violations constitute a plausible risk of harm. Therefore, this situation fits the definition of an AI Hazard, as the use or presence of the AI system could plausibly lead to harm related to privacy and information security.

Concerns over "DeepSeek" personal data collection; South Korean government sends written inquiry | Yonhap News

2025-01-31
Yonhap News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's generative AI) and concerns about personal data collection and protection. However, the article describes a government inquiry and precautionary measures rather than an incident where harm has occurred or a hazard where harm is imminent. Therefore, this is a governance and regulatory response providing complementary information about ongoing oversight and potential future investigation, not a direct or indirect AI Incident or an AI Hazard.

DeepSeek use restrictions spread: hundreds of companies and government agencies worldwide fear information leaks

2025-01-30
Sankei Shimbun: Sankei News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and concerns about its use leading to information leaks and privacy violations. The response by organizations to restrict its use reflects recognition of a credible risk of harm, but no actual harm or incident is reported. Therefore, this qualifies as an AI Hazard, where the AI system's use could plausibly lead to an AI Incident involving privacy and information security harms.

"DeepSeek" use restricted by hundreds worldwide over fears of information leaks to China | Shinano Mainichi Shimbun Digital

2025-01-31
Shinano Mainichi Shimbun Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to potential information leakage and privacy harm. While the article reports on restrictions and concerns, it does not describe any realized harm or incident caused by the AI system. Therefore, this qualifies as an AI Hazard, as the use or misuse of the AI system could plausibly lead to harm (information leakage, privacy violations), but no direct or indirect harm has been reported yet.

Taiwan's public agencies impose blanket ban on China's DeepSeek: information-leak risk "cannot be ruled out"

2025-02-03
Sankei Shimbun: Sankei News
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned, and its use by public institutions is prohibited due to risks of information leakage and legal concerns related to data sourcing and bias. Although no specific harm has been reported as having occurred, the government perceives a credible risk of harm to national information security and legal compliance. This constitutes an AI Hazard because the AI system's use could plausibly lead to incidents involving information security breaches or violations of intellectual property rights, but no actual harm is described as having materialized yet.

US Defense Department staff used DeepSeek at work in the days before access was blocked

2025-01-31
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek AI chatbot) by U.S. defense personnel, with data stored on Chinese servers, which could plausibly lead to security or privacy harms. However, the article does not report any realized harm such as data leaks, espionage, or operational disruption. The access was blocked after a short period, and no direct or indirect harm has been described. Therefore, this situation constitutes an AI Hazard, as the use of the AI system in this context could plausibly lead to an AI Incident, but no incident has yet occurred.

Two experts flag the risks of using DeepSeek, focusing on the "volume" and "handling" of collected data

2025-01-31
Nikkei xTECH
Why's our monitor labelling this an incident or hazard?
The article centers on expert warnings about the risks and privacy concerns related to the use of the DeepSeek AI system, focusing on data collection, legal compliance, and potential misuse of information. No direct or indirect harm has been reported as having occurred yet. The concerns are about plausible future harms, such as violations of privacy laws or confidentiality breaches, and the potential for data misuse under Chinese law. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident if these risks materialize, but no incident has been described as having occurred.

Taiwan bans public agencies from using Chinese company "DeepSeek"'s AI

2025-02-03
Mainichi Shimbun
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's generative AI service) and concerns about its use leading to potential violations of privacy, data protection laws, and copyright laws, which are harms under the AI Incident definition. However, since the article only reports a ban to prevent these harms and does not describe any realized harm or incident caused by the AI system, it fits the definition of an AI Hazard—an event where AI use could plausibly lead to harm. Therefore, this is classified as an AI Hazard.

Taiwan bans DeepSeek use across all government agencies over security concerns

2025-02-03
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and its use by government and critical infrastructure entities. The ban and warnings are due to plausible security risks and potential harm (information leakage, data sharing with a foreign government) that could lead to harm to critical infrastructure and national security. Since no actual harm has been reported yet, but the risk is credible and significant, this qualifies as an AI Hazard. The event is not merely general AI news or a complementary update but a concrete governmental action based on plausible future harm from the AI system's use.

DeepSeek is a "Chinese product": Taiwan bans its use in public agencies

2025-02-01
afpbb.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's R1 generative AI) whose use is directly linked to potential violations of national security and personal data protection laws, which fall under violations of human rights and breach of legal obligations. Although no direct harm is reported as having occurred, the government's ban and investigations indicate credible concerns about the AI system's role in threatening information security and privacy. Since the article focuses on the potential and ongoing risks posed by the AI system's use, including official prohibitions and investigations, this qualifies as an AI Hazard rather than an AI Incident, as no realized harm is explicitly described yet.

Hundreds of companies and government agencies worldwide restrict use of China's AI "DeepSeek"

2025-01-31
Ryukyu Shimpo Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system (DeepSeek) developed by a Chinese company, which is being restricted by hundreds of organizations due to concerns about information leakage and privacy violations linked to Chinese government laws. These concerns indicate a credible risk of violations of privacy and possibly human rights if the AI system were used without restrictions. Since no actual harm or incident is reported, but the potential for harm is clearly articulated and has led to preventive actions, the event fits the definition of an AI Hazard.

"DeepSeek" use restricted by hundreds worldwide over fears of information leaks to China

2025-01-30
Kobe Shimbun NEXT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system (DeepSeek) and concerns about information leakage and privacy violations linked to its use. The involvement of the AI system is clear, and the concerns stem from its use and the legal obligations of the company under Chinese law. However, the article does not report any realized harm but rather the plausible risk of harm, leading to restrictions on its use. This fits the definition of an AI Hazard, as the event involves plausible future harm due to the AI system's use, but no direct or indirect harm has yet materialized.

Italian authority restricts Chinese AI, opens investigation into personal data collection: Jiji.com

2025-01-30
Jiji.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system by a company that is processing personal data, which is a clear AI system involvement. The Italian authority's restriction and investigation stem from concerns about the AI system's use of personal data and potential legal violations. However, the article does not report any realized harm or incident but rather a regulatory action and investigation due to potential risks. Therefore, this event is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related privacy concerns without describing a specific AI Incident or AI Hazard.

Government's Personal Information Protection Commission issues alert over China's AI "DeepSeek": "Chinese law applies"

2025-02-03
Yomiuri Shimbun Online
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (a Chinese generative AI) and discusses concerns related to data privacy and jurisdictional legal implications. However, it does not describe any actual harm or incident caused by the AI system, nor does it report a specific event where harm occurred or was narrowly avoided. Instead, it provides a cautionary advisory to users, which fits the definition of Complementary Information as it supports understanding of potential risks and governance issues without reporting a new incident or hazard.

Personal Information Protection Commission urges caution over DeepSeek use: "Chinese law applies"

2025-02-03
Nihon Keizai Shimbun (Nikkei)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek, a generative AI) and highlights risks related to data privacy and legal compliance. Although no direct harm has been reported yet, the article points out a credible risk that the AI system's use could lead to violations of personal data protection rights due to the application of Chinese laws and potential data collection by authorities. Therefore, this situation constitutes an AI Hazard because it plausibly could lead to an AI Incident involving violations of human rights or legal obligations related to personal data protection.

Tonight's NEXT: the capabilities of and concerns about China's AI "DeepSeek"

2025-02-03
Nihon Keizai Shimbun (Nikkei)
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview and analysis of the AI system DeepSeek and its societal and market impact, including regulatory responses and investment activities. There is no direct or indirect report of harm caused by the AI system, nor a plausible imminent risk of harm detailed. The mention of mental health trends is contextual and not directly linked causally to the AI system. Therefore, the content fits the definition of Complementary Information, as it enhances understanding of AI developments and concerns without reporting a new AI Incident or AI Hazard.

Digital Minister Taira: civil servants "should refrain from using" DeepSeek

2025-02-02
Nihon Keizai Shimbun (Nikkei)
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (a generative AI developed by DeepSeek) and discusses concerns about its use, particularly regarding data protection and privacy risks. However, there is no indication that any harm has yet occurred or that a specific incident involving this AI system has taken place. The warning is about potential risks and advises caution, which aligns with a plausible future risk rather than a realized harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm related to data protection and privacy if not properly managed.

Taiwan bans DeepSeek use in government agencies, citing information risks

2025-01-31
Nihon Keizai Shimbun (Nikkei)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system (DeepSeek) and the government's decision to prohibit its use in official agencies due to security and information leakage risks. However, there is no indication that any harm has already occurred. The event is about a preventive measure addressing potential risks, which fits the definition of an AI Hazard or Complementary Information. Since the main focus is on the government's policy response to potential risks rather than an incident or direct harm, this is best classified as Complementary Information.

Digital Minister Taira urges restraint in using generative AI developed by DeepSeek | NHK

2025-02-01
NHK Online
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system whose use raises concerns about personal data protection, but no actual harm or incident has been reported yet. The minister's call to limit use and the mention of possible regulatory action indicate a recognition of plausible future harm related to privacy violations. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of personal data protection and related harms, but no direct harm has occurred so far.

Irish authority demands DeepSeek provide information on its processing of citizens' data

2025-01-30
Yomiuri Shimbun Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developed by DeepSeek, and the authorities are investigating its data processing practices due to concerns about personal data protection. However, there is no indication that any harm has yet occurred or that the AI system has malfunctioned or been misused. The article describes a regulatory oversight action and information request, which is a governance response to potential issues but does not report an actual incident or realized harm. Therefore, this qualifies as Complementary Information, as it provides context and updates on governance and oversight related to AI systems but does not describe an AI Incident or AI Hazard.

DeepSeek AI riddled with errors, one survey finds a 17% accuracy rate... US rating firm calls it "China's mouthpiece"

2025-01-31
Yomiuri Shimbun Online
Why's our monitor labelling this an incident or hazard?
DeepSeek AI is an AI system generating content with a very low accuracy rate, spreading misinformation and biased narratives, which harms communities by influencing public opinion with false information. Additionally, the system's data handling practices pose risks to user privacy and rights, with regulatory investigations underway. These factors meet the criteria for an AI Incident because the AI system's use has directly led to realized harms (misinformation and privacy violations).

Usage data subject to Chinese law; points of caution on DeepSeek -- Personal Information Protection Commission: Jiji.com

2025-02-03
Jiji.com
Why's our monitor labelling this an incident or hazard?
The article involves a generative AI system developed by a Chinese company and discusses the implications of data handling and legal jurisdiction, which could plausibly lead to violations of privacy rights and surveillance-related harms. However, the article does not report any actual harm or incident occurring yet, but rather warns about potential risks and advises caution. Therefore, this qualifies as an AI Hazard, as the use or development of the AI system could plausibly lead to harm related to personal data privacy and surveillance under Chinese law.

DeepSeek use restricted in public agencies -- Taiwan: Jiji.com

2025-01-31
Jiji.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's generative AI products) and concerns about data transfer leading to risks to national security, which is a form of harm to critical infrastructure or state security. However, the article describes a preventive measure restricting use to avoid harm rather than an actual harm occurring. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has yet occurred.

DeepSeek use banned in Taiwan: Jiji.com

2025-02-03
Jiji.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (a generative AI service) and a regulatory action restricting its use due to cybersecurity concerns. There is no indication that harm has occurred yet, but the ban is a preventive measure to avoid potential risks. This fits the definition of an AI Hazard, as the development or use of the AI system could plausibly lead to harm (cybersecurity risks) if unrestricted. The event is about governance response and risk mitigation rather than an incident or realized harm.

Use restrictions on China's AI DeepSeek spread across countries and regions amid fears of information leaks to the Chinese government

2025-02-03
Sankei News
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system (DeepSeek) whose use is being restricted by various governments due to concerns about information leakage and security risks. The AI system's involvement is clear, and the concerns relate to plausible future harm (information leaks to a foreign government, which could violate privacy and security rights). No actual harm or incident has been reported yet, only precautionary restrictions and investigations. Hence, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident but has not yet done so.

The net tightens around DeepSeek: "hundreds of companies" restrict use over fears of leaks to the Chinese government

2025-01-30
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI model) whose use is being restricted due to concerns about data privacy and potential unauthorized data sharing with the Chinese government. Although no direct harm has been reported, the potential for data leakage and privacy violations constitutes a credible risk of harm to users' rights. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to violations of human rights or privacy breaches. The article does not describe an actual incident of harm but focuses on the potential risks and regulatory responses, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and associated risks.

US investigates whether DeepSeek obtained Nvidia chips via Singapore

2025-01-31
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and acquisition of AI-related hardware (semiconductors used for AI tasks) by an AI startup, DeepSeek. The investigation is about potential violation of U.S. regulations designed to control the distribution of AI technology. While no direct harm is reported as having occurred yet, the potential for regulatory breach and unauthorized technology transfer to China represents a plausible risk of harm, such as undermining export controls and national security. Therefore, this situation qualifies as an AI Hazard because it plausibly could lead to an AI Incident if the technology is used in ways that violate laws or cause harm. There is no indication that harm has already occurred, so it is not an AI Incident. It is not merely complementary information because the main focus is on the investigation of potential regulatory evasion and its implications, not on responses or ecosystem updates. It is not unrelated because it clearly involves AI systems and their hardware.

Taiwan asks government agencies not to use DeepSeek, citing safety concerns

2025-01-31
Newsweek Japan official site
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI service) and concerns about its use by government agencies. The advisory is based on potential security risks, including data leakage and cross-border data transfer, which could plausibly lead to harm such as violations of privacy or national security breaches. Since no actual harm has been reported but there is a credible risk, this event qualifies as an AI Hazard rather than an Incident or Complementary Information.

Taiwan bans use of China's AI "DeepSeek" in public agencies, premier says - Focus Taiwan

2025-02-03
Focus Taiwan - CNA Japanese edition
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's generative AI) and concerns about its development and use leading to potential harms such as copyright infringement, biased data (censorship), information security risks, and privacy violations. However, the article does not report any actual harm or incident caused by the AI system but rather a government decision to prohibit its use in public agencies to prevent such harms. This fits the definition of Complementary Information because it details a governance response to potential AI-related risks and provides context on regulatory measures, rather than reporting a realized AI Incident or an imminent AI Hazard.

"Don't download" China's DeepSeek: Policy Research Council chair Onodera urges in Diet questioning | News | Liberal Democratic Party

2025-02-03
Liberal Democratic Party
Why's our monitor labelling this an incident or hazard?
The article describes a generative AI system whose use has already resulted in the dissemination of false or misleading information, which constitutes harm to communities and potentially violates rights to accurate information. The AI system's biased responses on territorial and political issues demonstrate direct involvement in causing misinformation harm. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's outputs.

Use of Chinese-made AI restricted in public agencies over information security concerns / Taiwan - Focus Taiwan

2025-02-01
Focus Taiwan - CNA Japanese edition
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system developed by a Chinese company and the official restriction on its use in public and critical infrastructure settings due to concerns about data security and potential information leakage. Although no actual harm is reported, the concerns about risks to national information security and data leakage constitute a plausible future harm scenario. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to critical infrastructure or violation of confidentiality rights.

Taiwanese authorities call for ban on "DeepSeek" generative AI in public agencies | Nippon TV NEWS NNN

2025-02-01
Nippon TV NEWS NNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system developed by a Chinese company and the Taiwanese government's request to ban its use in public institutions due to security concerns. The concern is about potential information leakage and threats to information safety, which could harm critical infrastructure or public information security. Since no actual harm has occurred yet but there is a credible risk leading to harm, this fits the definition of an AI Hazard. The event is not an AI Incident because harm has not materialized, nor is it Complementary Information or Unrelated.

New downloads of Chinese AI "DeepSeek" blocked in Italy

2025-01-30
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is involved, and the event stems from its use and data processing practices. However, no actual harm has been reported; rather, the regulatory authority's intervention suggests potential risks related to privacy and data protection. Since no direct or indirect harm has occurred yet, but there is a plausible risk leading to regulatory blocking of downloads, this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.

New downloads of Chinese AI "DeepSeek" blocked in Italy | Yonhap News

2025-01-29
Yonhap News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek app) and regulatory actions due to privacy concerns, which relate to legal obligations protecting user rights. However, no actual harm or incident caused by the AI system is reported; the blocking is a preventive regulatory measure. The event is about governance response and regulatory scrutiny, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard. It enhances understanding of AI ecosystem governance and privacy issues but does not describe a new harm or plausible future harm caused by the AI system.

Fearing personal data will be siphoned to China... "this country" blocks new downloads of "DeepSeek"

2025-01-30
Asia Business Daily
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and concerns about its data handling practices. The Italian data protection authority's actions are regulatory responses to potential privacy violations, which relate to human rights protection. However, the article does not report any realized harm or incident caused by the AI system, nor does it describe a plausible imminent harm event. Instead, it focuses on the regulatory inquiry and download blocking as precautionary measures. This fits the definition of Complementary Information, as it provides context on governance responses to AI privacy risks without describing a new AI Incident or AI Hazard.

New downloads of AI DeepSeek blocked in Italy

2025-01-29
Wow TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its data handling practices, leading to a regulatory action blocking new downloads in Italy. However, no direct or indirect harm from the AI system's use or malfunction is reported. The blocking is a preventive measure to avoid potential privacy violations, which could be considered a plausible future harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Italy sends DeepSeek its first request about how user data is handled: "The data of millions of Italians is at risk"

2025-01-29
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose development and use raise concerns about personal data protection and intellectual property rights violations. Although no direct harm has yet been reported, the regulatory investigation and expressed concerns indicate a plausible risk of harm to users' privacy and rights. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to violations of data protection laws and intellectual property rights, but no confirmed incident of harm has occurred yet.

DeepSeek blocked in Italy as the privacy Garante opens an investigation into the Chinese AI app

2025-01-31
rador.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the DeepSeek chatbot) and regulatory action taken due to concerns about data privacy and user data processing. While no direct harm is reported, the blocking and investigation indicate potential or ongoing violations of data protection rights, which fall under violations of human rights or legal obligations. Since the investigation and blocking are responses to potential or ongoing harm, this qualifies as an AI Incident due to the realized or ongoing violation of privacy rights linked to the AI system's use.

Italy blocks DeepSeek over privacy concerns

2025-01-31
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's chatbot) whose use has led to regulatory intervention due to privacy violations and non-compliance with legal requests. This constitutes a violation of applicable law protecting fundamental rights, specifically data privacy, which fits the definition of an AI Incident under category (c). The blocking and investigation indicate that harm or breach has occurred or is ongoing, not just a potential risk. Therefore, this is classified as an AI Incident.

The West begins banning DeepSeek on a large scale: hundreds of companies restrict access, and the app is no longer available in Italy

2025-01-31
Observator News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use has directly led to violations of data privacy laws (GDPR) and potential breaches of citizens' rights to data protection, constituting a violation of human rights and legal obligations. The Italian authority's suspension and ongoing investigations in other countries confirm that harm has occurred or is occurring. The security breach exposing sensitive internal data further supports the presence of realized harm. The event is not merely a potential risk or a complementary update but a concrete incident involving harm caused by the AI system's use and data management practices.

Italy: the Data Protection Authority has decided to block the Chinese AI model DeepSeek over fears about user security - Biziday

2025-02-01
Biziday
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot AI model) involved in processing user data. The Italian authority's decision to block it follows evidence that the AI system published sensitive user data online, constituting a direct harm to users' privacy and data security, which falls under violations of rights and harm to individuals. The refusal of the company to comply with data protection requests and the subsequent investigation further support the classification as an AI Incident. The involvement of other governments in banning or restricting DeepSeek due to security concerns reinforces the recognition of actual harm caused or ongoing risks. Therefore, this event is best classified as an AI Incident.

Italy has blocked DeepSeek, the new Chinese AI assistant, over concerns about personal data security - HotNews.ro

2025-01-31
HotNews.ro
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot AI assistant). The Italian authority's blocking and investigation stem from concerns about data privacy and non-compliance with legal frameworks, which relates to the AI system's use and development. However, no actual harm or incident is reported; the action is preventive and regulatory. This fits the definition of Complementary Information, as it provides context on governance responses to AI systems and their compliance with data protection laws, rather than describing an AI Incident or AI Hazard.

Italy has blocked the Chinese artificial intelligence app DeepSeek | TRT Romanian

2025-01-31
trt.net.tr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is under scrutiny for potential violations of personal data protection laws, which relate to fundamental rights. The blocking and investigation are preventive measures to protect users' rights, indicating a plausible risk of harm but no confirmed harm yet. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to violations of rights if unregulated, but no direct or indirect harm has been reported as having occurred at this time.

Italy blocks the Chinese app DeepSeek and opens an investigation into data theft

2025-01-31
digi24.ro
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose operation involves processing personal data. The Italian authority's action to block the system and investigate for data theft indicates that the AI system's use has led to or is suspected of leading to violations of legal protections for personal data, which falls under violations of human rights or breach of applicable law. Therefore, this event qualifies as an AI Incident due to realized or strongly suspected harm related to data privacy and legal compliance.

Italy has blocked DeepSeek

2025-01-31
Profit.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to regulatory action due to non-compliance with data protection laws, which are designed to protect fundamental rights. Although no direct harm such as injury or property damage is reported, the refusal to comply with legal frameworks and the blocking of data processing indicate a violation of obligations under applicable law intended to protect fundamental rights. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of legal obligations. The event is not merely a product launch or general news, but a concrete regulatory action in response to the AI system's use and legal non-compliance.

Italian authority demands explanations from DeepSeek on data protection

2025-01-29
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) that processes user data, and the data protection authority is investigating potential violations of data protection laws. While no direct harm is reported, the concerns about data exposure and lack of transparency represent a plausible risk of harm to individuals' privacy and rights. Therefore, this situation constitutes an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident if the issues are not resolved.

European data protection authority: first request for information sent to China's DeepSeek

2025-01-29
StartupCafe.ro
Why's our monitor labelling this an incident or hazard?
The event centers on regulatory scrutiny of an AI system's data processing practices and potential risks to personal data privacy under GDPR. While the AI system's use and data handling raise concerns about possible violations of data protection rights, no direct or indirect harm has been reported or confirmed. The request for information is a governance and oversight response to potential risks, not a report of realized harm. Therefore, this qualifies as Complementary Information, as it provides important context and updates on AI governance and risk assessment without describing an AI Incident or Hazard.

The Italian Data Protection Authority requests information from Deepseek

2025-01-30
JURIDICE.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns the use and processing of personal data, which is relevant to AI system development and use. However, the article describes a regulatory request for information due to potential risks, not an actual incident or harm caused by the AI system. There is no indication that any harm has occurred or that the AI system malfunctioned. The focus is on gathering information to assess potential risks and compliance, which fits the definition of Complementary Information as it provides context and updates related to AI governance and oversight without reporting a new incident or hazard.

First ban on DeepSeek in Europe - Mediapool.bg

2025-01-31
Mediapool.bg
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a Chinese AI chatbot). The event involves the use of this AI system and concerns data privacy and protection, which relates to violations of applicable law protecting fundamental rights (privacy). The blocking of the AI system due to lack of transparency about personal data use indicates a regulatory intervention to prevent potential or ongoing harm to users' rights. However, the article does not report actual realized harm but rather a preventive measure and investigation. Therefore, this event is best classified as Complementary Information, as it provides an update on governance and regulatory response to AI use and potential risks, rather than describing a concrete AI Incident or an AI Hazard with realized or imminent harm.

Italian regulator blocks DeepSeek's AI application

2025-01-31
Investor.bg
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) whose use has raised concerns about violations of data protection and privacy rights, which are fundamental rights protected by law. The regulator's order to block the application in Italy is a direct response to these concerns, indicating that the AI system's use has led to or is causing a breach of obligations under applicable law intended to protect fundamental rights. Therefore, this constitutes an AI Incident due to the realized harm (or ongoing violation) related to privacy and data protection rights.

DeepSeek blocked in Apple and Google app stores for Italy

2025-01-29
Actualno.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI application) whose use involves processing personal data. The blocking and investigation stem from concerns about its data handling practices, which could lead to violations of data protection laws and users' rights. However, the article does not report any realized harm or incident caused by the AI system, only regulatory scrutiny and preventive measures. Therefore, this event is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related privacy concerns without describing a direct or indirect AI Incident or a plausible future AI Hazard.

Italy blocks the DeepSeek chatbot - Evrokom

2025-01-30
Evrokom - Information Without Borders
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (DeepSeek chatbot) and its use in processing personal data. The regulatory authority's intervention and investigation indicate concerns about possible violations of data protection laws, which fall under violations of human rights and legal obligations. Since no actual harm or confirmed violation has been reported yet, but there is a credible risk that the AI system's use could lead to such harm, this situation qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and regulatory response rather than a realized incident of harm.

Chinese chatbot DeepSeek blocked in Italy

2025-01-31
dnesplus.bg
Why's our monitor labelling this an incident or hazard?
The Italian data protection authority's blocking of DeepSeek is a preventive regulatory measure addressing potential privacy risks from the AI chatbot's data processing practices. There is no indication that harm has already occurred, only that the AI system's use could plausibly lead to violations of data protection rights if unregulated. Therefore, this event fits the definition of an AI Hazard, as it involves plausible future harm from the AI system's use, prompting official intervention to mitigate risk.

South Korea Investigates DeepSeek Over User Data Privacy Concerns - EconoTimes

2025-01-31
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about its data privacy practices, which relate to potential violations of personal information protection laws. However, the event is about an investigation and regulatory inquiry without any confirmed harm or breach at this stage. Therefore, it represents a plausible risk of harm (privacy violations) but no realized incident. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if privacy violations are confirmed, but no direct or indirect harm has yet occurred according to the article.

Italy blocks access to Chinese AI app DeepSeek to protect users' data

2025-01-31
The Mirror
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised concerns about violations of data protection laws, which are part of legal obligations protecting fundamental rights. The blocking of access and investigation indicate that the AI system's use has directly led to a potential or actual breach of user data rights. Since the article describes a regulatory intervention due to these concerns, and the harm relates to violations of rights, this qualifies as an AI Incident.

Italy Blocks Access To Chinese AI App DeepSeek Over Privacy Concerns - UrduPoint

2025-01-31
UrduPoint
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek chatbot) and concerns about its data collection practices leading to a regulatory intervention to prevent potential privacy harm. Although no direct harm is reported, the blocking action is a preventive measure addressing plausible risks to user data privacy, which is a fundamental right. Therefore, this event represents a governance response to a potential AI-related harm, fitting the category of Complementary Information rather than an incident or hazard.

Italy becomes first country to ban DeepSeek, country cites privacy concerns over personal data usage

2025-01-31
Financialexpress
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek, an AI-powered chatbot) is explicitly involved. The event stems from the use and development of this AI system, specifically its data practices. However, the article does not report any realized harm or violation of rights; rather, it describes a precautionary regulatory measure and an ongoing investigation due to insufficient information about data usage. Therefore, this event represents a plausible risk of harm (privacy violations) that could lead to an AI Incident if confirmed, but as of now, it is a potential issue. The main focus is on the regulatory blocking and investigation, indicating a credible risk but no confirmed incident. Hence, it qualifies as an AI Hazard.

What is the Garante, the Italian privacy watchdog taking on DeepSeek?

2025-01-29
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and the regulatory authority's inquiry into its data practices, which relates to the development or use of the AI system. However, the article does not report any actual harm or violation caused by DeepSeek, only a request for information. This fits the definition of Complementary Information, as it provides context on governance and oversight related to AI systems but does not describe an AI Incident or AI Hazard.

Italy's regulator blocks Chinese AI app DeepSeek on data protection concerns

2025-01-31
Irish Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's chatbot) whose use raised data protection concerns. The regulator's intervention to block the chatbot is a response to potential violations of privacy rights, which are part of human rights and legal obligations. Although no direct harm is reported yet, the unresolved privacy issues and regulatory blocking indicate a plausible risk of harm to individuals' rights if the AI system continued operating without compliance. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to violations of rights, but no actual harm has been reported as having occurred yet.

Italy bans China's DeepSeek AI over data use concerns - UPI.com

2025-01-31
UPI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose use has raised regulatory concerns related to data protection and privacy. The authorities' intervention to block the AI system's data processing indicates a response to potential violations of legal obligations protecting fundamental rights (privacy). However, the article does not report any realized harm but rather a regulatory action and investigation, indicating a potential risk or hazard rather than an incident with realized harm.

Italy's privacy regulator goes after DeepSeek

2025-01-29
POLITICO
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory inquiry into DeepSeek's data practices, focusing on compliance with data protection laws. There is no mention of any realized harm or incident caused by the AI system, nor any plausible future harm explicitly stated. The event is about oversight and ensuring legal compliance, which fits the definition of Complementary Information as it provides context and updates on governance responses related to AI systems.

DeepSeek blocked by some app stores in Italy to protect users'...

2025-01-30
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use has raised data protection concerns. The blocking action and investigation are responses to potential violations of data privacy laws, which relate to violations of applicable law protecting fundamental rights. Although no direct harm is explicitly reported, the authority's intervention indicates a risk of harm to users' privacy rights. Since the event describes regulatory action and investigation rather than a realized harm incident, it is best classified as Complementary Information providing context on governance and societal response to AI-related risks.

South Korea Investigates DeepSeek's Handling of User Data - EconoTimes

2025-02-01
EconoTimes
Why's our monitor labelling this an incident or hazard?
The article focuses on a formal inquiry into the AI system's data management to ensure compliance with privacy regulations. There is no mention of actual harm, violation, or incident caused by the AI system's development, use, or malfunction. The event is about regulatory scrutiny and transparency efforts, which fits the definition of Complementary Information as it provides context and updates on governance responses to AI-related privacy concerns without reporting a specific AI Incident or AI Hazard.

French press review - Taiwan is ready to pay the price for US support

2025-01-31
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and its removal due to data protection concerns, which implies regulatory scrutiny but no realized harm or incident caused by the AI system. The potential risks to user data and privacy are concerns but do not constitute a realized AI Incident or a plausible AI Hazard as per the definitions. The geopolitical analysis involving AI cooperation between Taiwan and the US is contextual and does not describe any AI-related harm or plausible future harm. Hence, the content fits the definition of Complementary Information, providing updates and context on AI system regulation and geopolitical AI developments without reporting a new AI Incident or AI Hazard.

Italian regulator blocks DeepSeek to protect personal data

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system providing chatbot services. The Italian regulator's action to block the app due to insufficient transparency about personal data usage indicates concerns about potential violations of data protection laws, which protect fundamental rights. The event involves the use of an AI system and its development/use leading to regulatory intervention to prevent harm to individuals' data privacy rights. Although no explicit harm is reported as having occurred, the regulatory blocking and investigations reflect a response to potential or ongoing violations of rights. Given the regulatory action and the focus on protecting personal data rights, this qualifies as an AI Incident involving violations of rights under applicable law.

Italian data regulator restricts DeepSeek and opens an investigation

2025-01-31
Voice of America
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) whose use of personal data is under regulatory scrutiny. The Italian authority's restriction and investigation stem from concerns about data privacy and compliance with applicable laws, which relate to potential violations of fundamental rights (privacy). Since no actual harm has been reported but there is a credible risk of legal violations and privacy harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action and investigation, indicating plausible future harm if issues are not resolved.

Italian regulator says it has blocked DeepSeek

2025-01-31
Oriental Daily News (Malaysia)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI application with chatbot services) whose use of personal data lacks transparency, leading to regulatory action to protect users' data privacy. This constitutes a violation of data protection laws, which are part of applicable legal frameworks protecting fundamental rights. Since the AI system's use has directly led to a breach of obligations under applicable law (data protection regulations), this qualifies as an AI Incident under the definition of violations of human rights or breach of legal obligations.

Australian cybersecurity firm: DeepSeek is controlled by the CCP and should be banned on government devices | The Epoch Times

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use and data collection practices are linked to potential privacy breaches and influence operations aligned with CCP interests. The cybersecurity company explicitly recommends banning the AI system on government and critical infrastructure devices due to these risks. Although no actual harm has been reported yet, the credible warnings and recommendations indicate a plausible risk of harm to privacy, security, and democratic institutions. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving violations of privacy and security, but no direct harm has been documented at this time.

DeepSeek blocked by some app stores in Italy to protect users' personal data

2025-01-30
FOX31 Denver
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use involves processing personal data. The blocking action and investigation by the data protection authority indicate concerns about potential violations of data protection laws, which relate to users' fundamental rights to privacy and data security. Although no explicit harm has been reported yet, the authority's intervention reflects a response to potential or ongoing violations of legal obligations protecting personal data, which falls under violations of human rights or legal obligations. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system and the regulatory action addressing harm or risk to users' rights.

Italy restricted access to DeepSeek

2025-01-31
Main news of Kazakhstan - Tengrinews.kz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek chatbot) whose use prompted regulatory action to protect user data. Although no direct harm is reported, the restriction and investigation respond to potential breaches of data protection rights, which would fall under violations of human rights or legal obligations. Because the main focus is the regulatory response and investigation rather than a specific realized harm or a plausible future hazard, this fits the definition of Complementary Information.

Italy Blocks Chinese AI Model DeepSeek Over Data Privacy Concerns - Space/Science news - Tasnim News Agency

2025-01-31
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI assistant chatbot) that collects and processes personal data. The Italian regulator's action to block the app stems from its failure to provide adequate information about data practices, which is a violation of data privacy laws protecting fundamental rights. The Australian cybersecurity advisory further indicates potential misuse aligned with political objectives and data control by a foreign government, raising national security concerns. While no direct harm has been reported, the event clearly involves plausible risks of harm to privacy and security, fitting the definition of an AI Hazard rather than an AI Incident, as the harms are potential and regulatory action is preventive.

Italy blocks access to Chinese AI app DeepSeek over privacy concerns

2025-01-31
Anadolu Agency
Why's our monitor labelling this an incident or hazard?
The AI system (DeepSeek) is explicitly mentioned and is involved in processing personal data of users. The blocking and investigation by the privacy watchdog indicate concerns about violations of privacy rights, which fall under violations of human rights or legal obligations protecting fundamental rights. Although no direct harm is reported yet, the event reflects a regulatory response to potential or ongoing harm related to data privacy. This fits the definition of Complementary Information because the main focus is on the regulatory action and investigation as a response to privacy concerns, rather than a confirmed AI Incident causing realized harm or an AI Hazard indicating plausible future harm.

Business - Italy blocks DeepSeek over data privacy concerns

2025-01-30
France 24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns about its data privacy practices, which could plausibly lead to violations of data protection laws and personal rights. Since no actual harm or violation has been confirmed or reported, and the main focus is on regulatory scrutiny and potential risks, this fits the definition of an AI Hazard rather than an Incident. The mention of auctioned confiscated goods is unrelated to the AI context.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-01-30
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek, an AI assistant app) and concerns its use and handling of personal data. However, no actual harm has been reported yet; the authority's action is preventive to protect users' data privacy and ensure compliance with data protection laws. This situation represents a plausible risk of harm related to data privacy violations if the AI system's data practices are not transparent or compliant. Therefore, it qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and the authority is acting to prevent it.

Italy blocks Chinese AI app DeepSeek, opens investigation into data use

2025-01-30
South China Morning Post
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. While no direct harm has been reported yet, the investigation and blocking indicate potential risks related to violations of data protection and privacy rights, which fall under violations of applicable law protecting fundamental rights. Since the harm is not yet realized but plausible, this constitutes an AI Hazard rather than an AI Incident. The article focuses on regulatory action and investigation rather than a realized harm or incident.

Italy Data Watchdog Restricts DeepSeek, Opens Probe - UrduPoint

2025-01-30
UrduPoint
Why's our monitor labelling this an incident or hazard?
The AI system (DeepSeek chatbot) is explicitly mentioned and is involved in processing personal data of Italian users. The Italian data watchdog has restricted its data processing and opened an investigation due to insufficient compliance with data protection regulations, indicating a breach of legal obligations protecting fundamental rights. This constitutes a violation of rights (privacy/data protection), which fits the definition of an AI Incident. The event is not merely a potential risk but an active restriction and investigation following observed non-compliance, indicating realized harm or breach rather than a plausible future harm or complementary information.

Italy's Garante Blocks DeepSeek Over Data Privacy Concerns | Technology

2025-01-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use has raised concerns about violations of data privacy rights, a fundamental right protected under applicable law. The blocking and investigation indicate that the AI system's development or use is linked to potential or actual breaches of legal obligations regarding personal data. This constitutes a violation of rights (c) under the AI Incident definition, as the AI system's use has directly led to regulatory action due to non-compliance with data protection laws.

Italy Blocks DeepSeek: A Data Protection Milestone | Technology

2025-01-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory intervention against an AI system due to insufficient transparency and potential data protection violations, which could lead to harm if unresolved. However, no actual harm or incident has been reported yet. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to violations of rights or legal breaches, but no direct or indirect harm has been confirmed at this stage.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-01-30
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes regulatory action taken against an AI system (DeepSeek) because of concerns about its handling of personal data and lack of transparency. While no direct harm has been reported, the authority's intervention indicates potential risks related to privacy violations and legal non-compliance. Since the AI system's use could plausibly lead to violations of data protection rights if unaddressed, this constitutes an AI Hazard rather than an Incident, as no realized harm is described yet.

Italy Blocks DeepSeek: Data Privacy Concerns Rise | Technology

2025-01-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event concerns the use of an AI system (DeepSeek) and its handling of personal data, which relates to potential violations of data privacy rights. However, the article does not report any realized harm or incident caused by the AI system; rather, it describes a regulatory action and an ongoing investigation to prevent possible harm. Therefore, this is a governance and regulatory response providing complementary information about AI risks and oversight, not an AI Incident or Hazard.

Italy Blocks Chinese AI Model Over Privacy Concerns | Technology

2025-01-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use raised concerns about personal data usage, implicating privacy rights. The blocking action is a regulatory response to prevent potential or ongoing violations of data protection laws, which are part of fundamental rights. Although no direct harm is reported as having occurred, the blockade is a preventive measure against plausible harm related to privacy violations. Therefore, this event is best classified as Complementary Information, as it reports a governance response to an AI-related privacy concern rather than a realized incident or a hazard with imminent risk.

DeepSeek under the scrutiny of the privacy Garante

2025-01-28
informazione.it
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and concerns its use and data processing practices. However, there is no indication that any harm has yet occurred or that the AI system has malfunctioned or been misused to cause harm. The investigation and information request by the privacy authority represent a governance and oversight response to potential risks, but no realized harm or incident is reported. Therefore, this is Complementary Information providing context and updates on regulatory scrutiny and privacy concerns related to an AI system.

Italy's privacy watchdog blocks Chinese AI app DeepSeek (Reuters)

2025-01-30
Investing.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek, an AI chatbot) and concerns the use of personal data, which relates to privacy rights and legal obligations. The blocking action and investigation by the privacy watchdog indicate a regulatory response to potential violations of data protection laws. However, the article does not report any realized harm or incident caused by the AI system, only a preventive measure due to insufficient information. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI systems rather than describing an AI Incident or AI Hazard.

Italy Bans China's DeepSeek AI Chatbot Over Privacy Fears

2025-01-31
Decrypt
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) whose use has led to regulatory action due to alleged violations of data privacy laws, which are legal protections of fundamental rights. The Italian data protection authority's order to block the chatbot in Italy is a direct consequence of the AI system's data collection practices that failed to comply with legal frameworks. This constitutes a violation of rights (privacy rights) caused by the AI system's use, meeting the criteria for an AI Incident. The event does not merely describe potential future harm or general AI news but reports a concrete regulatory response to realized or ongoing harm related to the AI system's operation.

DeepSeek App Blocked In Italy After Privacy Complaint Under EU's GDPR, Irish Data Protection Commission Also Investigating

2025-01-31
Techdirt
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system whose use and data processing practices have led to regulatory actions due to violations of GDPR, a legal framework protecting fundamental rights. The blocking of the app and investigation by the Italian authority directly result from the AI system's handling of personal data, which constitutes a breach of applicable law intended to protect fundamental rights. The exposure of sensitive data and lack of transparency about training data further underline the harm caused. Therefore, this event meets the criteria of an AI Incident because the AI system's use has directly led to violations of legal obligations and privacy harms.

Italy bans DeepSeek AI

2025-01-31
Euro Weekly News
Why's our monitor labelling this an incident or hazard?
DeepSeek AI is an AI system whose data processing practices have been found to potentially violate GDPR, a legal framework protecting fundamental rights related to personal data. The ban and investigations by multiple European data protection authorities indicate that the AI's use has directly led to a breach of obligations under applicable law. This constitutes an AI Incident as per the definition, since the AI system's use has caused violations of human rights/legal obligations (privacy rights).

A Deep-See On DeepSeek: How Italy's Ban Might Shape AI Oversight

2025-01-31
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose data processing practices have led to regulatory intervention due to privacy and data protection concerns, which constitute violations of fundamental rights under applicable law (GDPR). The ban and investigation are direct consequences of the AI system's use and data handling, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a breach of legal obligations protecting user rights. The article also mentions similar past incidents (ChatGPT ban) and ongoing investigations, reinforcing the classification as an incident rather than a hazard or complementary information.

Your DeepSeek Chats May Have Been Exposed Online

2025-01-30
Lifehacker
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chat application, thus involving an AI system. The exposure of sensitive user data, including chat histories and encryption keys, due to a security vulnerability constitutes a direct harm to users' privacy and data security, which falls under violations of rights and harm to individuals. Although no confirmed unauthorized access is reported, the exposure itself is a realized harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use and its security failure.

DeepSeek Blocked by Some App Stores in Italy to Protect Users' Personal Data

2025-01-30
LatestLY
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot application, thus involving an AI system. The blocking by the data protection authority and the investigation indicate concerns about violations of data protection laws, which relate to violations of rights under applicable law. However, the article does not report actual harm occurring yet but focuses on regulatory intervention to prevent potential harm to users' personal data privacy. Therefore, this event is best classified as Complementary Information, as it details a governance response to potential AI-related privacy issues rather than a realized AI Incident or a plausible future hazard.

International regulators probe how DeepSeek is using data. Is the app safe to use?

2025-01-31
Georgia Public Broadcasting
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, which collects and processes user data. The concerns raised by regulators and experts focus on the potential misuse of this data, especially given its storage in China and the possibility of government access. No actual data breaches or misuse have been reported yet, so no direct harm has occurred. However, the plausible risk of harm to privacy, national security, and user rights is credible and significant. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the data were misused or accessed improperly. The article does not describe a realized harm or incident, nor is it merely complementary information or unrelated news.

Italy Bans DeepSeek, But Banning an AI Model Is Harder Than You Think

Italy remains firm on its 'no-nonsense' agenda: you will not find the DeepSeek AI chatbot on Apple's App Store or Google's Play Store in Italy. How did it go from popularity to a ban? The Italian DPA acted on a complaint filed by the consumer coalition group Euroconsumers over DeepSeek's personal data handling practices. DeepSeek has been given 20 days to respond while the watchdog investigates its storage of user data on servers in China, which Euroconsumers argues violates European data protection law.

What is the actual issue? Data privacy: user data is stored in China, raising fears of government access. Cybersecurity: the U.S. Navy has warned its personnel against using DeepSeek for both work and personal use to prevent cybercrime. Bias: because DeepSeek is open source and free of cost, it can be manipulated and censored to give biased information.

The U.S. suspects DeepSeek's AI model was trained on U.S. AI models (like ChatGPT) through a technique called "distillation". Distillation is a process in which a newer AI model learns from an existing, more powerful model, letting developers transfer knowledge without investing in expensive computing resources. The U.S. government's concern is that this could harm national security as well as U.S. AI dominance.

Is it really banned in Italy? The DeepSeek app is blocked, but Italian users can still download its open-source model and run it locally. They can also access it via Perplexity, a third-party platform that hosts it on servers in the U.S. and EU, outside of China. Banning DeepSeek is difficult for two reasons: distillation is hard to detect when knowledge is extracted from models like ChatGPT, and open-source models can be downloaded freely, which makes enforcement difficult.

What are netizens saying? "It seems DeepSeek is getting blocked in Italy. Remember they also blocked ChatGPT in the beginning for a short time. Will this trend continue, or is this a nothing burger?" - Dominik Filkus in a recent post on X.

Is there any solution? For now, blocking all Chinese IP addresses could be an instant fix, but users could still find ways to bypass such restrictions. Whether it is Italy, the U.S., or another country moving against DeepSeek, removing the app is never the ultimate solution; proper guidelines and law enforcement can be. Will DeepSeek be able to tackle challenges like these? Stay tuned to learn more!
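The "distillation" technique mentioned above can be illustrated numerically. This is a minimal sketch of the core idea, a temperature-softened divergence loss between a teacher model's output distribution and a student's, using hypothetical logit values; real distillation optimizes a student network against a teacher's outputs over a large dataset, which is why it is hard to detect after the fact.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature before normalizing; a higher temperature
    # softens the distribution, exposing more of the teacher's "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between temperature-softened teacher and student
    # distributions: the student is trained to match the teacher's full
    # output distribution, not just its top answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.2]  # hypothetical teacher outputs for one input
student = [2.5, 1.5, 0.5]  # hypothetical student outputs for the same input
loss = distillation_loss(teacher, student)
```

Minimizing this loss across many inputs pulls the student's behavior toward the teacher's; when the two sets of logits agree exactly, the loss is zero.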

2025-01-31
TECHi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose use and data handling practices are under investigation for potential violations of data protection laws and national security risks. The blocking of the app in Italy is a regulatory response to these concerns. However, no actual harm such as injury, rights violations, or disruption has been reported as having occurred yet. The article highlights plausible future harms related to data privacy breaches and national security threats if the AI system is used improperly or without adequate controls. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but has not yet directly or indirectly caused harm.

Italy Blocks DeepSeek AI Over Data Privacy Concerns

2025-01-31
MEDIANAMA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek R1) and details how its use and development have led to significant concerns about data privacy violations, lack of transparency in data handling, and security vulnerabilities that expose sensitive information. These concerns constitute violations of legal obligations protecting personal data and privacy, which are fundamental rights. The blocking of the AI model by Italy's data protection authority and the initiation of an investigation indicate that these harms are recognized and materialized or ongoing. Additionally, the exposure of sensitive data and potential for privilege escalation represent direct security harms. Hence, the event meets the criteria for an AI Incident due to realized harm related to rights violations and security risks stemming from the AI system's use and malfunction.

Italian Regulator Blocks DeepSeek Over Personal Data Concerns

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article describes regulatory actions taken against an AI system (DeepSeek) due to concerns about personal data handling and legal compliance. Although the AI system is involved and there is a potential for harm related to privacy and data protection rights, the article does not report any actual harm or incident caused by the AI system. Instead, it focuses on the investigation, blocking of access, and requests for information, which are governance and societal responses to potential risks. This fits the definition of Complementary Information, as it updates on ongoing regulatory scrutiny and potential future risks without confirming an AI Incident or AI Hazard.

As Italy and US Navy ban DeepSeek, here's how some countries are reacting

2025-01-31
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system involved in processing personal data. The Italian authority's action to restrict data processing and investigate indicates concerns about potential violations of data protection laws, which relate to violations of legal obligations protecting fundamental rights. Since the event involves the use of an AI system leading to regulatory intervention due to data privacy concerns, it constitutes an AI Incident under the category of violations of human rights or breach of applicable law protecting fundamental rights. The event reports realized harm or at least regulatory action due to the AI system's use, not just potential future harm or general information, so it is not a hazard or complementary information.

DeepSeek AI blocked by Italian authorities as other member states open probes

2025-01-31
euronews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI) whose data processing practices are under scrutiny by authorities for potential non-compliance with GDPR, which protects fundamental rights related to privacy. While this raises concerns about possible violations of rights, no actual harm or confirmed breach has been reported yet. The event is about regulatory investigations and preventive actions, which align with the definition of Complementary Information as it provides updates on governance responses and ongoing assessments rather than describing a realized AI Incident or a plausible future hazard.

Italy's Garante blocks DeepSeek over data privacy concerns

2025-01-31
Verdict
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, and its use has led to regulatory intervention due to concerns about violations of data protection laws, which are part of fundamental rights. The blocking of the app to protect users' personal data indicates a direct response to potential or ongoing violations of privacy rights. This constitutes a violation of human rights or breach of obligations under applicable law intended to protect fundamental rights, thus qualifying as an AI Incident.

Italy bans Deepseek AI for stealing sensitive user data: All details

2025-01-31
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose data handling practices have raised regulatory concerns leading to a ban in Italy. The AI system's development and use are central to the issue. However, the article does not report any actual harm occurring yet, only potential privacy and cybersecurity risks and regulatory non-compliance. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (privacy violations, cybersecurity issues, disinformation), but no direct or indirect harm has been confirmed or reported at this stage.

Italy's Watchdog Blocks AI App DeepSeek Over Data-Privacy Concerns, Launches Probe

2025-01-31
Morningstar, Inc.
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (DeepSeek chatbot) and concerns about its data privacy practices, which implicate potential violations of legal obligations protecting personal data (a form of fundamental rights). However, the article does not report any realized harm or injury resulting from the AI system's use, only regulatory intervention and investigation. Therefore, this is not an AI Incident but rather a governance and regulatory response to potential legal non-compliance, fitting the definition of Complementary Information.

Ireland and Italy send data watchdog requests to DeepSeek: 'The data of millions of Italians is at risk'

2025-01-29
RocketNews
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI company operating a large language model, which processes personal data of users. The involvement of data protection authorities and consumer groups indicates concerns about potential violations of data protection laws (a breach of obligations under applicable law protecting fundamental rights). Although no direct harm has been confirmed, the authorities' actions and the removal of the app from stores in Italy suggest a credible risk that the AI system's data processing could lead to harm to individuals' privacy rights. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if data misuse or breaches occur.

DeepSeek AI app removed from Italian App stores amid data privacy investigation

2025-01-29
mint
Why's our monitor labelling this an incident or hazard?
The app DeepSeek is an AI system, and the data protection authority's concerns about GDPR breaches indicate potential violations of fundamental rights related to personal data privacy. Since the app has been removed pending investigation, no confirmed harm has occurred yet, but there is a plausible risk of legal violations. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to a breach of rights if the concerns are validated.

Korea to look into China's DeepSeek AI service over data privacy concerns

2025-01-31
The Korea Times
Why's our monitor labelling this an incident or hazard?
The article describes an official inquiry into an AI service's data practices amid privacy concerns, indicating potential risks related to violations of personal data protection laws and possibly human rights. However, no actual harm or incident has been reported so far; the event is about the plausible risk and regulatory response to the AI system's data handling. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm if data privacy violations occur, but no confirmed incident has yet taken place.

Irish watchdog contacts DeepSeek amid data concerns

2025-01-29
RTE.ie
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) whose use involves processing personal data. The involvement of data protection authorities and warnings about data storage and potential access by Chinese authorities indicate concerns about violations of data protection rights and privacy, which fall under violations of human rights or legal obligations. Since the article describes ongoing investigations and concerns but does not report any realized harm or breach, this situation represents a plausible risk of harm rather than an actual incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because the main focus is on the potential risks and regulatory scrutiny, not on responses or ecosystem developments unrelated to harm or hazard.

Ireland and Italy send data watchdog requests to DeepSeek: 'The data of millions of Italians is at risk'

2025-01-29
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article describes regulatory authorities sending information requests to DeepSeek regarding its data processing and privacy practices, reflecting concerns about potential risks to personal data and compliance with data protection laws. While these concerns highlight plausible risks of harm (e.g., violations of data protection rights), no actual harm or confirmed violations have been reported yet. The event is thus best classified as an AI Hazard, as it plausibly could lead to an AI Incident if data protection violations or harms occur, but currently remains an investigation and risk assessment stage. It is not Complementary Information because the focus is on the potential risk and regulatory action, not on updates to a known incident. It is not unrelated because it clearly involves an AI system (DeepSeek's large language model) and data protection concerns linked to its use.

Italian data privacy agency probes China's DeepSeek AI, as EU tests GDPR compliance

2025-01-29
euronews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI, a large language model) and concerns its development and use, specifically regarding data privacy and legal compliance. The investigation is triggered by concerns that the AI system may be processing EU personal data in ways that violate GDPR and possibly the EU AI Act, which could lead to violations of fundamental rights and intellectual property rights. However, since the probe is ongoing and no confirmed harm or breach has been reported, this situation represents a plausible risk of harm rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Italy launches DeepSeek investigation over privacy concerns

2025-01-29
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data handling practices are under scrutiny for potential violations of privacy and data protection laws, which are fundamental rights. The investigation by Italy's data watchdog is a response to credible concerns about possible misuse or mishandling of personal data, including cross-border data transfers without safeguards, which could lead to significant harm to users' privacy rights. Since no actual harm or breach has been confirmed or reported yet, and the event centers on the potential for such harm, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the regulatory probe and the plausible risks, not on a resolved or ongoing incident or a broader governance response. It is not unrelated because it clearly involves an AI system and potential harm.

Data Protection Stopping Machine Overlords?

2025-01-28
mondaq.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns its use and data processing practices. However, the article does not report any realized harm such as injury, rights violations with confirmed impact, or other direct damages caused by the AI system. Instead, it focuses on regulatory scrutiny, potential legal breaches, and preventive measures. This fits the definition of Complementary Information, as it provides important context and updates on governance and societal responses to AI-related risks without describing a new AI Incident or AI Hazard.

DeepSeek disappears from the Italian App Store and Google Play Store amid privacy complaint

2025-01-30
TechRadar
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and discusses regulatory scrutiny due to privacy concerns and potential GDPR violations. However, it does not report any realized harm such as injury, rights violations, or other direct consequences caused by the AI system. The removal of the app from stores is a regulatory response to potential risks, not an incident of harm or a direct hazard event. The focus is on the privacy complaint, regulatory investigation, and user caution advice, which fits the definition of Complementary Information as it updates on governance and societal responses to AI-related privacy concerns without describing a new AI Incident or AI Hazard.

DeepSeek under fire in Europe as Ireland and Italy investigate data handling

2025-01-30
Economy Middle East
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek, an AI chatbot) whose data processing practices are under scrutiny by European regulators for potential breaches of GDPR and privacy rights. Although no realized harm is reported, the concerns about indefinite storage of personal data without anonymization and the potential for misuse represent credible risks of harm to fundamental rights. The regulatory actions and investigations indicate a plausible future risk rather than a confirmed incident. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek springs leak, are AI agent chats exposed?

2025-01-30
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly states that DeepSeek, an AI language model system, leaked sensitive user data including chat logs and potentially passwords and API keys due to unprotected database storage. This is a direct harm to user privacy and security, fulfilling the criteria for harm to persons and violation of rights. The AI system's use and deployment caused this harm. Although the leak was later secured, the exposure already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Italian data protection watchdog goes after DeepSeek, seeks detailed info on personal data collection

2025-01-30
Firstpost
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and concerns its use and data handling practices. The Italian regulator's inquiry is about potential violations of data protection laws (GDPR), which relate to fundamental rights and privacy. However, the article does not describe any actual harm or breach that has occurred yet, only the potential for such harm if data is mishandled or improperly processed. Therefore, this is best classified as an AI Hazard, reflecting a credible risk of violation of rights and privacy due to the AI system's data practices, pending the outcome of the investigation.

European Regulators Probe DeepSeek

2025-01-29
databreachtoday.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's generative AI model) whose data processing practices are under regulatory scrutiny due to potential violations of data protection laws, including illegal transfer of personal data and lack of transparency about automated decision-making. However, the article does not report any realized harm such as injury, rights violations already occurring, or disruption caused by the AI system. Instead, it focuses on regulatory inquiries and potential compliance issues, which represent a plausible risk of harm if violations are confirmed. Therefore, this event is best classified as Complementary Information, as it provides updates on governance and regulatory responses related to AI systems and their compliance with legal frameworks, without describing a direct or indirect AI Incident or an imminent AI Hazard.

Concerns over DeepSeek's handling of users' personal data; South Korean authorities to make official inquiries

2025-01-31
RFI
Why's our monitor labelling this an incident or hazard?
The article describes regulatory scrutiny and investigations into the data handling practices of an AI chatbot, which is an AI system. The concerns relate to potential violations of personal data protection laws, which could constitute violations of rights if realized. However, no actual harm or breach has been reported yet; the authorities are seeking information and clarifications. Therefore, this situation represents a plausible risk of harm (privacy violations) but no confirmed incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek blocked from some app stores in Italy amid questions on data use

2025-01-29
aol.co.uk
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose data collection and processing practices are under scrutiny by European regulators. The app's removal from app stores and ongoing investigations reflect concerns about potential violations of data protection laws (a breach of obligations under applicable law protecting fundamental rights). However, the article does not report any realized harm or incidents caused by the AI system's use or malfunction. The focus is on potential risks and regulatory responses, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The involvement of AI is explicit, and the plausible future harm relates to privacy violations and national security risks.

DeepSeek AI gets hit with data privacy red flag by Italy and Ireland

2025-01-29
Mashable
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, thus an AI system. The event involves the use of this AI system and concerns about how it processes and stores personal data, which could plausibly lead to violations of data protection laws and potentially harm users' privacy rights. However, the article does not report any realized harm or breach, only regulatory scrutiny and preventive actions. Therefore, this situation represents a plausible risk of harm rather than an actual incident. It fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving data privacy violations.

Dutch data protection regulator to open an investigation into DeepSeek

2025-02-01
zaobao.com.sg
Why's our monitor labelling this an incident or hazard?
An AI system is involved as DeepSeek is an AI company with an AI model. The event concerns the use and development of AI systems and their data handling practices. However, the article does not report any realized harm or incident caused by the AI system, only regulatory concerns and potential risks of privacy violations. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to an AI Incident (privacy/data protection violations) if not properly managed. It is not Complementary Information because the focus is on the investigation and potential risk, not on updates or responses to a past incident. It is not unrelated because it clearly involves AI and regulatory scrutiny related to AI data use.

China's DeepSeek raises concerns as countries around the world take countermeasures

2025-02-01
看中國
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that DeepSeek's AI system is involved in unauthorized use of US AI models and chips, raising intellectual property and cybersecurity concerns. Multiple countries have taken or are taking regulatory or investigative actions due to realized or ongoing risks to personal data privacy and national security. These constitute violations of rights and harms to communities and individuals. The involvement of the AI system in these harms is direct and ongoing, meeting the criteria for an AI Incident rather than a mere hazard or complementary information. Therefore, this event is classified as an AI Incident.

Panic across the entire Nasdaq...

2025-02-01
mp.cnfol.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (DeepSeek's advanced AI models) and their use in quantitative finance and AI development. However, it does not describe any realized harm or incident caused by these AI systems, nor does it present a plausible immediate risk of harm. The market reaction and geopolitical concerns are indirect consequences of competitive dynamics rather than AI system malfunctions or misuse causing harm. The article mainly provides an update on AI technological progress and its impact on markets and international competition, fitting the definition of Complementary Information rather than Incident or Hazard.

Yuan Bin: Bubble burst? DeepSeek mired in four major doubts

2025-02-01
NTDChinese
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose development and use are under scrutiny for alleged unauthorized use of OpenAI's core data and models, which constitutes a breach of intellectual property rights (a legal obligation). The article reports ongoing investigations and actions taken by OpenAI and Microsoft, indicating that harm has already occurred or is actively being addressed. The misleading claims about DeepSeek's capabilities and cost also suggest potential market harm and deception. Therefore, this event qualifies as an AI Incident due to realized violations of rights and potential legal breaches linked directly to the AI system's development and use.

Yuan Bin: Bubble burst? DeepSeek mired in four major doubts

2025-02-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically large language models, with allegations that DeepSeek has stolen or improperly used OpenAI's models and data. This directly relates to a breach of intellectual property rights, which is a form of harm under the AI Incident category. The article describes ongoing investigations and actions taken by OpenAI and Microsoft, indicating that the harm is recognized and being addressed. Therefore, this qualifies as an AI Incident due to the realized violation of intellectual property rights through the development and use of AI systems.

DeepSeek already has its first European enemy: this country has taken on the Chinese chatbot by blocking it

2025-01-31
El Periódico
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and its use of personal data. The Italian authority's blocking action is a preventive measure due to lack of transparency and potential non-compliance with data protection laws, which protect fundamental rights. Since no actual harm or violation has been reported yet, but there is a credible risk of such harm if the system operates without proper safeguards, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it reports a new regulatory action and investigation, not a follow-up or response to a past incident. It is not Unrelated because it clearly involves an AI system and potential rights violations.

2025-01-31
guancha.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) used by U.S. military and government personnel. The concerns and subsequent blocking actions stem from the potential misuse of sensitive data and security risks associated with connecting to Chinese servers. Although there is no report of realized harm (such as data breaches or espionage), the plausible risk of such harms is credible and has prompted preventive measures. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident. The event is not an AI Incident because no direct or indirect harm has been reported yet. It is not Complementary Information because the main focus is on the emerging risk and preventive actions, not on updates or responses to a past incident. It is not Unrelated because the event clearly involves AI and potential harms.

Italy takes a step toward reining in DeepSeek, the new Chinese artificial intelligence

2025-01-31
El Periódico
Why's our monitor labelling this an incident or hazard?
The article details regulatory scrutiny and investigation into DeepSeek's data practices, highlighting concerns about privacy and compliance with legal frameworks. While these concerns imply potential risks, no actual harm or incident has been reported. The focus is on the regulatory response and ongoing investigation, which fits the definition of Complementary Information. It is not an AI Incident because no realized harm is described, nor an AI Hazard because the event is about regulatory action rather than a credible imminent risk of harm. It is not unrelated because the AI system and its use are central to the event.

DeepSeek Under Review in Multiple Countries; Taiwan Bans Use by Government Agencies | deepseek | AI | Artificial Intelligence | The Epoch Times

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system developed by a Chinese startup, and its use involves processing user data, which raises concerns about cross-border data transmission and information security. The bans and investigations by multiple governments indicate a credible risk that the AI system could lead to harm related to information security and privacy breaches, which fall under harm to communities or violations of rights. Since no actual harm has been reported but the risk is plausible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The focus is on potential future harm and ongoing assessments rather than realized harm.

Dutch Data Protection Authority to Investigate DeepSeek | deepseek | Ireland | Germany | The Epoch Times

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a generative AI) whose data collection and privacy practices are under investigation due to concerns about compliance with GDPR and Chinese laws that may compel data sharing with government intelligence. The article details regulatory warnings, investigations, and precautionary measures by multiple countries, indicating potential risks but no confirmed harm or incident caused by the AI system. The focus is on governance responses and risk assessment rather than a concrete AI Incident or an imminent AI Hazard. Hence, the event fits the definition of Complementary Information, as it enhances understanding of AI-related privacy and security concerns and regulatory actions without reporting a direct or indirect harm or a plausible future harm event.

DeepSeek under accusation, suspected of intellectual property theft

2025-01-29
informazione.it
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing investigation into DeepSeek's AI system and its data practices, with suspicions of intellectual property theft and concerns about personal data protection. However, it does not report any actual harm or confirmed violations resulting from the AI system's use or malfunction. The concerns and regulatory actions indicate plausible risks of harm, particularly regarding data privacy and intellectual property rights, but these remain potential rather than realized. Therefore, this event fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to incidents involving data breaches or IP violations, but no incident has yet occurred.

Italy blocks DeepSeek - to protect the data of Italian users

2025-01-30
watson.ch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted by a regulatory authority due to concerns about data protection and privacy. Although no direct harm has been reported, the authority's action indicates a credible risk that the AI system's operation could lead to violations of data protection rights, which are legally protected rights. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights if not controlled. The article does not describe an actual incident of harm but a preventive measure to avoid such harm.

Multiple Countries Impose Restrictions on the Use of DeepSeek

2025-01-30
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's AI large model) and concerns about its use of personal data and national security implications. The involvement is in the use and development of the AI system. Although multiple data protection authorities and governments are investigating or restricting the app, no direct or indirect harm has been reported as having occurred. The concerns and restrictions indicate a credible potential for harm, such as privacy violations or security risks, but these remain potential rather than realized. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Multiple Countries Place Restrictions on the Use of DeepSeek

2025-01-30
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI large model) whose use is being scrutinized by authorities for potential national security impacts and data privacy issues. The actions taken (app removal, investigation, and calls for caution) reflect concerns about plausible future harm but do not report any realized harm or incident. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has been confirmed or reported yet.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-01-31
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The DeepSeek chatbot is an AI system whose use involves processing personal data. The Italian regulator's blocking order is based on concerns about privacy policy compliance and data protection, which relate to potential violations of fundamental rights (privacy). Since no actual harm or breach has been reported, but the risk is credible and the regulator has taken preventive action, this fits the definition of an AI Hazard. The event is not Complementary Information because it is not an update on a previously reported incident but a new regulatory action based on potential harm. It is not an AI Incident because no realized harm or violation has been confirmed yet.

Ireland and Italy send data watchdog requests to DeepSeek: 'The data of millions of Italians is at risk'

2025-01-29
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's large language model) and concerns about its data processing and privacy practices. However, there is no indication that any harm has yet occurred due to the AI system's development, use, or malfunction. The data protection authorities are investigating potential risks and seeking information to determine compliance and safety. This fits the definition of Complementary Information, as it provides updates on governance responses and regulatory scrutiny related to AI systems, without reporting a concrete AI Incident or an imminent AI Hazard.

Italy restricts access to Chinese AI app 'DeepSeek', opens probe

2025-01-31
Wion
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) whose use of personal data is under scrutiny by a regulatory authority. The restriction and investigation are responses to potential violations of data protection laws, which relate to human rights and legal obligations. Since no realized harm is reported, but there is a credible risk of violation of rights if the system continues processing data without compliance, this constitutes an AI Hazard. It is not Complementary Information because the main focus is the regulatory action and investigation, not a response to a past incident. It is not an AI Incident because no actual harm or violation has been confirmed or reported yet.

Italy blocks AI app DeepSeek over data privacy concerns

2025-01-31
The Local Italy
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) and concerns its use and processing of personal data, which relates to violations of data protection laws and potentially users' privacy rights. The blocking and investigation by the authority indicate that the AI system's use has led to a breach or potential breach of legal obligations protecting fundamental rights. Since the event describes an actual regulatory action due to realized or ongoing violations related to the AI system's data processing, it qualifies as an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights.

DeepSeek blocked by some app stores in Italy to protect users' personal data

2025-01-30
Market Beat
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use involves processing personal data. The blocking by the data protection authority is a governance response to concerns about violations of data protection laws, which are designed to protect fundamental rights related to privacy. Although no direct harm is reported as having occurred, the investigation and blocking aim to prevent potential violations of users' rights. Therefore, this event is best classified as Complementary Information, as it details a societal and governance response to AI-related privacy concerns rather than reporting a realized AI Incident or a plausible future hazard.

DeepSeek blocked by some app stores in Italy to protect users' personal data

2025-01-30
WTOP News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised concerns about personal data collection, storage, and user notification, implicating potential violations of data protection laws. However, the article does not report any realized harm or injury but rather a regulatory intervention to prevent possible harm to users' personal data privacy. Therefore, this event is best classified as Complementary Information, as it details a governance response and investigation related to AI use and data protection, without describing a direct or indirect AI Incident or a plausible future AI Hazard.

Italy data watchdog restricts DeepSeek, opens probe

2025-01-30
spacedaily.com
Why's our monitor labelling this an incident or hazard?
The Italian data watchdog's action is based on concerns about how DeepSeek collects and processes personal data, which could lead to violations of privacy rights and data protection laws. Since no actual harm or breach has been confirmed or reported, and the authority's measures are precautionary and investigatory, this fits the definition of an AI Hazard. The AI system's use of personal data without clear compliance or transparency poses a credible risk of harm, justifying the restriction and probe. There is no indication of realized harm or incident at this stage.

DeepSeek blocked by some app stores in Italy to protect users' personal data

2025-01-30
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, and the blocking by the data protection authority is due to concerns about personal data collection, storage, and user notification, which implicates potential violations of fundamental rights related to data privacy. Since the app has been downloaded by millions and the authority is taking regulatory action to prevent harm to users' personal data, this event involves the use of an AI system leading to a potential or ongoing violation of rights. However, the article does not specify that harm has already occurred, only that access is blocked to protect users and an investigation is ongoing. This indicates a plausible risk of harm rather than confirmed harm at this stage, fitting the definition of an AI Hazard rather than an AI Incident.

Italy Blocks Chinese DeepSeek A.I. Over Data Privacy Concerns

2025-01-30
The Rio Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI chatbot) and its use in Italy. The regulatory authority's action is a direct response to concerns about data privacy and legal compliance, which relates to human rights and data protection obligations. However, there is no indication that the AI system has directly or indirectly caused harm yet; rather, the blocking is a preventive measure. This aligns with the definition of Complementary Information, which includes governance responses and regulatory actions related to AI systems. Since no actual harm or plausible imminent harm is described, it is not an AI Incident or AI Hazard. The article focuses on the regulatory decision and its implications, making Complementary Information the appropriate classification.

Italy Blocks DeepSeek Chatbot App Over Data Protection Concerns

2025-01-30
Republic World
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek chatbot app) whose use has raised concerns about data protection and privacy, which are aspects of fundamental rights. The blocking action and investigation by the Italian authority indicate that the AI system's development or use has led to a breach or potential breach of legal obligations protecting personal data. Since the harm (violation of data protection rights) is directly linked to the AI system's use and has prompted regulatory intervention, this qualifies as an AI Incident under the framework.

Italy blocks Chinese AI tool DeepSeek over privacy concerns

2025-01-31
therecord.media
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that processes personal data, and the regulatory investigations and bans stem from concerns about its data privacy compliance. The event involves the use and development of an AI system that is alleged to be violating data protection laws, which protect fundamental rights. Although no specific harm is reported as having occurred yet, the regulatory ban and investigations indicate that the AI system's operation is considered to have caused or could cause violations of rights. Since the event focuses on the regulatory ban and investigation due to privacy concerns (a breach of obligations under applicable law protecting fundamental rights), this qualifies as an AI Incident.

International regulators probe how DeepSeek is using data. Is the app safe to use?

2025-01-31
KGOU
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot that collects and processes user data, fitting the definition of an AI system. The article focuses on regulatory probes and privacy concerns about how the data is handled and the potential for misuse by the Chinese government. Although there is no evidence of actual data breaches or misuse causing harm, the plausible risk of privacy violations and national security threats is credible and significant. Therefore, this event represents an AI Hazard, as it could plausibly lead to an AI Incident if data misuse or breaches occur in the future. The article does not describe any realized harm or incident, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on potential harm from the AI system's use and data practices.

Italy blocks Chinese AI model DeepSeek over data privacy concerns

2025-01-31
Business Insurance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use of personal data is under scrutiny. The blocking action is a response to potential violations of data privacy rights, which falls under violations of human rights or legal obligations protecting fundamental rights. However, the article does not report actual harm occurring but rather a preventive regulatory measure due to insufficient information, indicating a plausible risk of harm if the system were to operate without compliance. Therefore, this qualifies as Complementary Information, as it provides an update on governance and regulatory response to AI use rather than reporting a realized incident or a direct hazard.

Italy blocks DeepSeek

2025-01-31
The Herald
Why's our monitor labelling this an incident or hazard?
The article describes regulatory actions taken against an AI system due to concerns about data privacy and legal compliance, which relates to potential violations of rights. However, there is no indication that actual harm has occurred yet, only that the regulator is acting to prevent harm. The event focuses on governance and regulatory response rather than a direct or indirect harm caused by the AI system. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Italy blocks China's DeepSeek over privacy concerns

2025-01-31
DNyuz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to a regulatory action due to privacy violations, which constitute a breach of legal obligations protecting fundamental rights. The blocking order and investigation by the Italian data protection authority indicate that harm in the form of legal violations has occurred or is ongoing. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to a breach of applicable law protecting fundamental rights.

Italy blocks China's DeepSeek over privacy concerns

2025-01-31
POLITICO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to regulatory action due to privacy concerns, implying a breach of legal obligations protecting fundamental rights. However, the article does not describe actual harm occurring but rather a regulatory intervention to prevent potential harm or legal violations. Therefore, this is best classified as Complementary Information, as it provides an update on governance and regulatory response to AI use rather than describing a realized AI Incident or a plausible future hazard.

DeepSeek AI Blocked from App Stores Over Privacy Concerns

2025-01-31
TechnoCodex
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in processing user data, and the regulatory ban stems from concerns about its data privacy practices. While this raises important issues about potential legal violations, the article does not report any actual harm or incident caused by the AI system. The event centers on the potential for harm due to non-compliance and the regulatory response, which fits the definition of Complementary Information as it provides context on governance and oversight rather than describing a new AI Incident or AI Hazard.

Italian Regulator Blocks DeepSeek Over Personal Data Concerns

2025-01-31
NTD
Why's our monitor labelling this an incident or hazard?
The article describes regulatory investigations and access blocks due to concerns about personal data handling and potential national security risks related to the AI system DeepSeek. While these concerns indicate plausible risks of harm (such as privacy violations and censorship), no actual harm or incident has been reported yet. The event primarily details ongoing inquiries, regulatory responses, and warnings, which align with Complementary Information rather than an AI Incident or AI Hazard. The AI system's involvement is clear, and the potential for harm is recognized, but the article's main focus is on the regulatory and governance responses rather than a realized incident or a direct hazard event.

Italy Investigates DeepSeek Over Privacy Concerns

2025-01-29
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's chatbot AI model) and concerns about its data collection and processing practices. The Italian privacy watchdog's investigation is due to possible risks to personal data of millions, implying potential violations of privacy rights if the AI system mishandles data. However, the article does not report any actual harm or confirmed violations yet, only a credible risk and regulatory scrutiny. Hence, this is an AI Hazard because the AI system's use could plausibly lead to harm (privacy violations), but no incident has occurred so far.

Factbox-Italy's Privacy Watchdog Taking on Big Tech

2025-01-29
US News & World Report
Why's our monitor labelling this an incident or hazard?
The Garante's investigations and sanctions against AI companies for unlawful data processing and privacy breaches directly relate to the use of AI systems impacting individuals' personal data rights. The article details actual enforcement actions and penalties, indicating harm has occurred due to AI system use. This fits the definition of an AI Incident because the AI systems' use has directly or indirectly led to violations of fundamental rights (privacy/data protection).

DeepSeek blocked from some app stores in Italy amid questions on data use

2025-01-29
the Guardian
Why's our monitor labelling this an incident or hazard?
The article describes regulatory actions and investigations into DeepSeek's data practices, reflecting concerns about possible misuse of personal data and compliance with legal frameworks. While these concerns indicate plausible future harm (e.g., violations of privacy rights or national security risks), there is no evidence that harm has already occurred. The AI system's involvement is clear, and the event could plausibly lead to an AI Incident if data misuse or rights violations are confirmed. Therefore, this qualifies as an AI Hazard. It is not Complementary Information because the main focus is not on responses to a past incident but on ongoing regulatory scrutiny. It is not an AI Incident because no realized harm is reported yet.

Italy issues first data watchdog request to DeepSeek: 'Millions of Italians' data could be at risk'

2025-01-29
Research Snipers
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's large language model) and concerns about its data practices that could lead to violations of data protection laws (a form of legal rights violation). The Italian DPA's formal request indicates regulatory concern about potential misuse of personal data affecting millions, which could plausibly lead to an AI Incident if the issues are confirmed or unaddressed. However, no actual harm or breach has been confirmed or reported yet, so it is not an AI Incident. The event is not merely complementary information because it reports a formal regulatory action indicating credible risk. Hence, the classification as AI Hazard is appropriate.

Italy's data watchdog has questions for DeepSeek

2025-01-29
spacedaily.com
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory authority's inquiry into the data practices of an AI system (DeepSeek) due to potential risks to personal data privacy. However, there is no indication that any harm has occurred yet, only that the authority is seeking information and has given a deadline for response. This fits the definition of Complementary Information, as it provides context and updates on governance and oversight related to AI systems, without describing an actual AI Incident or AI Hazard.

Why DeepSeek Has Been Blocked on Apple and Google's App Stores in Italy

2025-01-30
Gadgets 360
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI assistant app) whose use involves processing personal data. The Italian data protection authority's investigation and subsequent blocking of the app from app stores indicate concerns about possible violations of data protection laws (GDPR), safeguarding of minors, bias, and electoral interference. Although no actual harm is reported as having occurred yet, the regulatory scrutiny and app removal reflect a credible risk that the AI system's use could lead to violations of rights and harm to communities. Hence, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if unaddressed.

DeepSeek is no game: The dangers of China's new AI

2025-01-30
EL PAÍS English
Why's our monitor labelling this an incident or hazard?
The article does not report a concrete incident where harm has already occurred due to DeepSeek or related AI systems, so it does not meet the criteria for an AI Incident. However, it extensively discusses credible risks and potential harms such as disinformation, erosion of public trust, authoritarian surveillance, and privacy violations that could plausibly result from the use of DeepSeek. The involvement of an AI system is explicit, and the concerns relate to its development and use. The article also highlights geopolitical and legal issues around data protection and misuse, reinforcing the plausibility of future harm. Thus, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

DeepSeek privacy under investigation in US and Europe; App Store impact

2025-01-30
9to5Mac
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and discusses regulatory investigations into its data privacy practices, focusing on GDPR compliance. The potential harm is violations of privacy rights and legal obligations, which is a recognized AI Incident category. However, no confirmed harm or breach has been reported yet; the investigations and app removals are precautionary or enforcement steps. Thus, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use if privacy violations are confirmed. The presence of regulatory scrutiny and app removal supports the plausibility of harm but does not confirm it. Therefore, AI Hazard is the most accurate classification.

Major AI Security Breach: DeepSeek's Database Exposed Sensitive Data

2025-01-30
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's chatbot) and a security breach exposing sensitive data related to the AI infrastructure. This exposure constitutes harm to property and potentially to communities if the data is exploited maliciously. The breach is a direct consequence of the AI system's operational security failure, thus qualifying as an AI Incident under the framework. The concerns about data being accessed by foreign governments and the U.S. Navy banning the app further underscore the seriousness of the harm and the direct link to the AI system.

China's DeepSeek faces its first ban: Italy blocks it on iPhones and Android phones

2025-01-30
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI app and details regulatory concerns about its use of personal data, potential bias, and electoral interference. Although no actual harm or incident has been reported, the app's removal from app stores and ongoing investigations by data protection authorities reflect credible concerns about possible future harms. The AI system's development and use could plausibly lead to violations of privacy rights and manipulation, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it focuses on the ban and investigation, which are responses to potential risks rather than updates on past incidents.

DeepSeek blocked from some app stores in Italy amid questions on data use

2025-01-29
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event centers on regulatory scrutiny and investigation into the AI system's data use and privacy compliance, with no direct or realized harm reported yet. The blocking of the app from stores is a precautionary measure amid concerns about potential misuse or mishandling of personal data. Since no actual harm has occurred but there is a plausible risk of harm related to data privacy and security, this qualifies as an AI Hazard rather than an Incident. The involvement of the AI system (DeepSeek chatbot) is explicit, and the potential for harm through data misuse or breaches is credible, justifying classification as an AI Hazard.

Italy Blocks DeepSeek From Some App Stores Over Data Usage Issues

2025-01-29
NewsX World
Why's our monitor labelling this an incident or hazard?
The article describes regulatory actions taken against DeepSeek due to concerns about data privacy and potential misuse of personal data, which could plausibly lead to violations of privacy rights and data protection laws. However, there is no indication that actual harm or violations have occurred yet. The AI system's development and use raise plausible future risks, but the event is primarily about investigation and precautionary removal from app stores. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Chinese AI app DeepSeek gets 20 day deadline from Italian data protection authority

2025-01-29
The Times of India
Why's our monitor labelling this an incident or hazard?
This event concerns regulatory scrutiny and a data protection investigation into an AI system's data practices, focusing on compliance and transparency. While there is concern about potential risks to personal data, no actual harm or violation has been confirmed or reported yet. The event describes a potential risk scenario where misuse or mishandling of personal data by the AI system could plausibly lead to harm, but no direct or indirect harm has materialized at this stage. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek AI app removed from Italy's app stores amid data protection concerns By Investing.com

2025-01-29
Investing.com India
Why's our monitor labelling this an incident or hazard?
The event centers on regulatory actions and investigations into the app's compliance with data protection laws, which is a governance and societal response to potential AI-related privacy issues. There is no indication that harm has occurred or that the app's use or malfunction has directly or indirectly led to injury, rights violations, or other harms. Therefore, this is Complementary Information providing context on oversight and potential future risk assessment rather than an AI Incident or AI Hazard.

Italy regulator seeks info from DeepSeek on data protection

2025-01-28
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory authority requesting information from an AI company about its data use and legal basis, which is a governance and compliance matter. There is no indication that harm has occurred or that there is a plausible risk of harm from the AI system's use at this stage. Therefore, this is best classified as Complementary Information, as it provides context on governance and oversight related to AI data practices without describing an AI Incident or AI Hazard.

DeepSeek unavailable in Apple and Google app stores in this country. Here's why

2025-01-29
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI chatbot) whose use is under scrutiny by a data protection authority due to potential high risks to personal data privacy. The app's removal from app stores in Italy is a precautionary measure pending investigation. Since no direct harm has occurred yet but there is a credible risk of harm to personal data privacy, this qualifies as an AI Hazard. It is not an AI Incident because no realized harm is reported, nor is it Complementary Information or Unrelated.

AI spy! India keeps vigil as privacy fears run deep

2025-01-30
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model) whose use involves data collection and processing with potential cross-border data transfer to China. The Indian government is monitoring for possible misuse or unauthorized transfer of personal data, which could lead to violations of privacy rights and data protection laws. No actual harm or incident has been reported yet, but the plausible risk of harm (privacy violations, data misuse) is credible and under active scrutiny. Hence, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The article does not describe a realized harm or incident, nor is it merely a general update or unrelated news.

AI SPY! India Keeps Vigil as Privacy Fears Run Deep

2025-01-30
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model app) whose use raises concerns about data privacy and sovereignty, which are fundamental rights. The article explicitly states that the Indian government is monitoring the situation due to potential data transfer to China and possible misuse. Since no actual harm or violation has been confirmed yet, but there is a credible risk that such misuse could occur, this fits the definition of an AI Hazard. The event does not describe a realized AI Incident, nor is it merely complementary information or unrelated news.

DeepSeek impact: India keeps vigil as privacy fears run deep - ET Telecom

2025-01-30
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article highlights plausible future risks related to AI systems' data handling and privacy policies, which could lead to violations of privacy rights or data sovereignty issues. Since no actual harm or incident has occurred yet, and the focus is on monitoring and concerns, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

South Korea to send inquiry to China's DeepSeek over data privacy concerns By IANS

2025-02-01
Investing.com India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) involved in collecting and using personal data for AI training, which raises concerns about privacy violations—a breach of obligations under applicable law protecting fundamental rights. However, the event currently describes a regulatory inquiry and potential investigation, with no confirmed realized harm yet. Therefore, it represents a plausible risk of harm (privacy violations) that could lead to an AI Incident if confirmed. As such, this is best classified as Complementary Information, since the main focus is on the regulatory response and ongoing assessment rather than a confirmed AI Incident or AI Hazard.

Italy regulator seeks information from DeepSeek on data protection

2025-01-28
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns its data handling practices, which relate to privacy and data protection rights. However, the article does not report any realized harm or incident caused by the AI system, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it details a regulatory investigation and information request, which is a governance response to potential risks. Therefore, this qualifies as Complementary Information, as it provides context on societal and regulatory responses to AI use and potential privacy concerns, without describing a new AI Incident or AI Hazard.

"It's The New TikTok": National Security Concerns Spike Over China's DeepSeek

2025-01-29
Forbes
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data collection and content generation. While no direct harm has been confirmed, the article highlights credible risks of data being accessed by a foreign government, potential misuse of personal data, and the AI's capability to produce malware. The bans and investigations indicate recognition of plausible future harms. Since the harms are potential and not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident.

DeepSeek app unavailable in Apple and Google app stores in Italy

2025-01-29
CNBC
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI application) and its use of personal data, which is under investigation by a data protection authority. While this raises concerns about compliance with data protection laws and potential privacy violations, the article does not report any actual harm or violation having occurred yet. The app's removal from stores is a precautionary or regulatory action. Therefore, this event is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related data privacy concerns without describing a specific AI Incident or AI Hazard.

DeepSeek under fire as Italian data protection authority investigates potential privacy concerns

2025-01-30
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article describes regulatory investigations into DeepSeek's AI system regarding its data handling practices and compliance with privacy laws. Although no confirmed harm has been reported yet, the concerns about unlawful data collection, storage on foreign servers, and potential misuse of personal data constitute a credible risk of harm to individuals' privacy rights. The involvement of multiple data protection authorities and government warnings further support the plausibility of future harm. Since the event does not describe realized harm but focuses on potential risks and regulatory responses, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek sending your private chats to China? Everything you should know before using this ChatGPT-rival

2025-01-28
MoneyControl
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system that collects and shares users' private chats and personal data, which directly implicates violations of privacy and data protection rights. The sharing of personal conversations and sensitive information without adequate safeguards or user consent constitutes a breach of fundamental rights and applicable laws. The AI system's use has directly led to these harms, qualifying this event as an AI Incident under the framework's criteria for violations of human rights and legal obligations.

International regulators probe how DeepSeek is using data. Is the app safe to use?

2025-01-31
NPR
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot that collects and processes user data, qualifying it as an AI system. The article details regulatory probes and warnings about the app's data collection and storage practices, highlighting concerns about potential misuse of data by the Chinese government. Although no direct harm has been reported, the plausible future risk of privacy violations and national security threats due to data misuse fits the definition of an AI Hazard. The article does not describe any realized harm or incident but focuses on potential risks and regulatory scrutiny, which aligns with the AI Hazard classification.

China's DeepSeek AI is watching what you type

2025-01-29
NBC News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a large language model chatbot) and discusses risks related to its use, particularly privacy and surveillance risks under Chinese law. These risks could plausibly lead to violations of rights or harm to individuals if data is accessed or misused. However, the article does not report any realized harm or incident caused by the AI system, only potential risks and warnings. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred or been reported.

South Korea seeks DeepSeek's policy on personal data collection

2025-01-31
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article describes governmental actions probing and restricting the use of DeepSeek due to concerns about personal data collection and security risks. Although no specific harm has materialized, the potential for harm to privacy and security is credible and plausible. The AI system's development and use raise concerns that could lead to violations of rights or security incidents. Therefore, this situation qualifies as an AI Hazard, as the AI system's involvement could plausibly lead to an AI Incident, but no direct harm has been confirmed yet.

Italy seeks Chinese DeepSeek AI details on data protection

2025-01-29
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) and concerns the use and handling of personal data, which relates to potential violations of data protection and privacy rights. However, the article describes an ongoing regulatory inquiry without reporting any realized harm or confirmed violation. Therefore, it does not describe an AI Incident but rather a potential risk or concern that could lead to harm if data protection is not ensured. This fits the definition of an AI Hazard, as the investigation reflects a plausible risk of harm to individuals' data privacy stemming from the AI system's use.

China's DeepSeek AI hit by information request from Italy's data protection watchdog

2025-01-29
engadget
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) and concerns about its data handling practices, which could plausibly lead to violations of data protection laws and potential harm to individuals' privacy. However, no actual harm or incident has been reported yet; the event is about a regulatory inquiry and potential risk assessment. Therefore, this qualifies as an AI Hazard, as the watchdog's request is based on plausible future harm related to data privacy and security risks from the AI system's use.

S. Korea to send inquiry to China's DeepSeek over data privacy concerns | Yonhap News Agency

2025-01-31
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI startup collecting personal data for AI training, which involves an AI system. The inquiry is prompted by concerns over data privacy risks, indicating potential violations of personal rights if the data is misused or inadequately protected. No actual harm or incident is reported yet, only the potential for harm. Hence, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving privacy violations.

Italian DPA Asks DeepSeek About Its User Data - TechNadu

2025-01-29
TechNadu
Why's our monitor labelling this an incident or hazard?
The article describes regulatory and consumer organization actions questioning DeepSeek's data practices and security vulnerabilities, which could plausibly lead to harm such as privacy violations or malicious use of the AI system. However, no direct or indirect harm has been confirmed or reported as having occurred yet. The focus is on potential risks, investigations, and concerns rather than an actual incident causing harm. Therefore, this qualifies as an AI Hazard, reflecting credible potential for harm due to data privacy issues and security vulnerabilities associated with the AI system.

DeepSeek is no longer available to download in Italy: Report

2025-01-31
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and concerns about its data handling practices, privacy, cybersecurity, and disinformation risks. The Italian data protection authority's intervention and the US Navy's advisory highlight credible concerns about potential harms. However, there is no indication that these harms have materialized yet; rather, the app was removed preemptively due to insufficient responses from the company. The presence of potential security threats and privacy violations that could plausibly arise from the AI system's use or misuse fits the definition of an AI Hazard. There is no evidence of direct or indirect harm having occurred, so it cannot be classified as an AI Incident. The article is not primarily about responses or updates to a past incident, so it is not Complementary Information. It is clearly related to AI and potential harms, so it is not Unrelated.

Italy regulator seeks information from DeepSeek on data protection

2025-01-29
The Hindu
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns its data protection practices, which relate to privacy and legal compliance. However, there is no indication that any harm has occurred yet; the regulator is seeking information to assess compliance. This is a governance and regulatory response to potential AI-related risks but does not describe an incident or a hazard with realized or plausible harm. Therefore, it fits the category of Complementary Information, as it provides context on societal and governance responses to AI use and potential risks.

DeepSeek AI collects your keystrokes and may never delete them

2025-01-29
Tom's Guide
Why's our monitor labelling this an incident or hazard?
DeepSeek AI is an AI system as it involves an AI platform with chat tools and data processing. The event focuses on the use and data collection practices of this AI system, which could plausibly lead to violations of privacy and human rights due to the collection of sensitive data like keystroke patterns and potential censorship. However, the article does not report any realized harm or incident but raises credible concerns about future risks. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek AI banned in Italy as data privacy concerns pile up

2025-01-31
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek AI chatbot) whose use has led to realized harms, specifically violations of data privacy rights and breaches of GDPR and other legal frameworks. The Italian Data Protection Authority's ban and investigation confirm that the AI system's development and use have directly caused harm to users' fundamental rights. The collection and sharing of invasive personal data, unclear user control over data, and national security concerns constitute violations of human rights and legal obligations. These harms are materialized, not just potential, making this an AI Incident rather than a hazard or complementary information.

Explaining DeepSeek: The AI Disruptor That's Raising Red Flags for Privacy and Security | McAfee Blog

2025-01-31
McAfee Blogs
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's advanced language models and chatbot) and a security breach exposing sensitive user data. This breach is a direct harm to users' privacy and data security, which qualifies as a violation of human rights and harm to individuals. The AI system's role is pivotal as the exposed data includes chat logs and operational details tied to the AI service. The incident has already occurred and caused harm, not just a potential risk, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

Italian regulator asks DeepSeek for information about data collection

2025-01-29
therecord.media
Why's our monitor labelling this an incident or hazard?
The article details a data privacy regulator requesting information from an AI company about its data collection and processing practices. While this reflects concerns about potential privacy violations, no actual harm or breach is reported. The event is a regulatory action aimed at oversight and ensuring compliance, which fits the definition of Complementary Information as it provides context and updates on governance responses to AI-related issues without describing a specific incident or hazard.

The International DeepSeek Crackdown Is Underway

2025-01-31
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek-R1) whose use and data management practices have led to concerns about violations of personal data privacy and potential breaches of data protection laws (e.g., GDPR). The exposure of chat histories and sensitive information constitutes harm to individuals' privacy rights, which falls under violations of human rights and legal obligations protecting fundamental rights. The blocking of access and investigations indicate that harm is either ongoing or has already occurred. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of rights and harm to users' privacy.

Security experts urge caution using DeepSeek AI chatbot because of China links

2025-01-30
Mirror
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model chatbot) whose use involves processing user inputs and data. The article does not report any realized harm but raises credible concerns about potential privacy violations and data misuse due to the app's data storage and sharing policies, which could plausibly lead to violations of rights and harms to users' privacy. Therefore, this situation fits the definition of an AI Hazard, as the development and use of DeepSeek could plausibly lead to an AI Incident involving privacy and security harms.

DeepSeek blocked on Apple, Google app stores in Italy

2025-01-29
Khaleej times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek AI assistant) and concerns about its use of personal data and potential for bias and electoral interference. However, the article does not report any realized harm or incident caused by the AI system; rather, it describes regulatory scrutiny and preventive blocking of the app to ensure compliance with data protection laws and to mitigate potential risks. Therefore, this is best classified as an AI Hazard, as the AI system's use could plausibly lead to harms such as violations of privacy rights, bias, or electoral interference, but no direct or indirect harm has yet occurred according to the article.

DeepSeek privacy concerns raise international alarm bells

2025-01-31
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) that processes user data and generates chatbot responses. The harms include violations of privacy rights through data collection and sharing without sufficient protection, a breach of obligations under applicable law protecting fundamental rights. The misinformation produced by the AI chatbot has already caused or contributed to harm, such as electoral interference in Romania linked to similar disinformation campaigns. The national security concerns and regulatory bans further underscore the severity of the harms, and the AI system's involvement, through its use and data practices, is both direct and indirect. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek's app becomes unavailable on Apple's and Google's app stores in Italy | TechCrunch

2025-01-29
TechCrunch
Why's our monitor labelling this an incident or hazard?
DeepSeek's app uses AI and processes personal data, which is central to the complaint and regulatory inquiry. The event concerns potential violations of data protection laws, which protect fundamental rights. While no direct harm has been reported, the regulatory scrutiny and complaint indicate a plausible risk of rights violations. Since the event is about a formal complaint and regulatory investigation without confirmed harm or incident, it fits best as Complementary Information, providing context on governance and societal responses to AI-related data privacy concerns.

Italy's data watchdog has questions for DeepSeek

2025-01-29
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI chatbot) and concerns about its data processing practices. However, the event is about regulatory questions and investigation rather than a realized harm or incident. There is no indication that personal data misuse or privacy violations have occurred yet, only that the watchdog is seeking information due to potential risks. Therefore, this is an AI Hazard, as the situation could plausibly lead to an AI Incident if data misuse or privacy violations are confirmed, but no incident has been established at this point.

Amid Privacy Fears, India To Store DeepSeek AI On Local Servers: Ashwini Vaishnaw

2025-01-30
TimesNow
Why's our monitor labelling this an incident or hazard?
The article discusses privacy concerns and international scrutiny over DeepSeek's data handling and storage, which involves an AI system. While these concerns relate to potential violations of privacy rights, the article does not describe any actual harm or incident caused by the AI system. The focus is on precautionary measures, such as India's decision to store the AI on local servers, and statements from officials expressing caution. This fits the definition of Complementary Information, as it provides supporting context and governance responses without reporting a new AI Incident or AI Hazard.

Italy removes DeepSeek from Google, Apple app stores

2025-01-29
The Star
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) and regulatory intervention due to privacy concerns, which relates to potential legal and rights issues. However, there is no indication that the AI system has directly or indirectly caused harm yet. The removal is a precautionary measure pending investigation, so it represents a plausible risk rather than a realized incident. Therefore, this event is best classified as Complementary Information, as it provides context on regulatory responses and ongoing scrutiny of AI systems without reporting a specific AI Incident or Hazard.

DeepSeek blocked on Apple and Google app stores in Italy

2025-01-29
ThePrint
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI assistant) is explicitly involved. The event stems from the use of the AI system and regulatory concerns about its compliance with data protection laws, user safeguarding, bias, and electoral interference risks. However, no actual harm or violation has been confirmed or reported yet; the regulator is investigating and has blocked the app's availability pending responses. This situation represents a plausible risk of harm (privacy violations, bias, electoral interference) but no realized harm is described. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Italy regulator seeks information from DeepSeek on data protection

2025-01-28
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) and concerns about data protection and national security, which relate to legal and rights issues. However, the article does not describe any actual harm or incident caused by the AI system, only that regulators are investigating potential issues. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI without reporting a specific AI Incident or AI Hazard.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-01-30
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use raised concerns about data protection, a fundamental right. The regulator's blocking of the app and investigation indicate that the AI system's use is linked to potential violations of legal obligations protecting user data. Although no explicit harm is reported yet, the regulatory action and investigation imply that the AI system's use has led to or is likely to lead to violations of rights. Therefore, this qualifies as an AI Incident due to the breach or potential breach of data protection rights caused by the AI system's use.

DeepSeek cannot be accessed on Apple and Google app stores in Italy

2025-01-29
ThePrint
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI assistant) and regulatory scrutiny over its data practices, which relates to privacy and legal compliance. However, there is no indication that any harm has yet occurred or that the AI system has malfunctioned or been misused to cause harm. The event is primarily about regulatory inquiry and preventive action, not about realized harm or a direct threat of harm. Therefore, it fits the category of Complementary Information as it provides context on governance and societal response to AI-related privacy concerns.

DeepSeek Gets Its First Strike Due To Privacy Concerns, Has Been Removed From Apple's & Google's App Stores, But Not In The U.S.

2025-01-30
Wccftech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek app) whose use of personal data is under regulatory scrutiny for compliance with GDPR, indicating concerns about potential violations of privacy rights (a form of human rights). The app has been removed from app stores in Italy, reflecting a response to these concerns. However, there is no indication that actual harm has occurred yet, only that there is a credible risk of harm if the issues are not resolved. The involvement of national security reviews further supports the plausibility of future harm. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek's Chatbot Was Being Used By Pentagon Employees For At Least Two Days Before The Service Was Pulled From The Network; Early Version Has Been Downloaded Since Fall 2024

2025-01-31
Wccftech
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) used by Pentagon employees, which was unauthorized and led to the blocking of the service on Department of Defense networks. The AI system's privacy policy indicates data storage on Chinese servers, raising concerns about data security and compliance with GDPR and other privacy regulations. The unauthorized use on critical infrastructure (DoD networks) and the potential exposure of sensitive data represent indirect harm to security and privacy, fitting the definition of an AI Incident. The event is not merely a potential risk but involves realized unauthorized use and consequent mitigation actions, distinguishing it from an AI Hazard or Complementary Information.

'Almost certain': Call to ban DeepSeek on government devices over China fears

2025-01-31
The Age
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system (DeepSeek) whose use is linked to potential harms including privacy violations and biased outputs that could influence democratic institutions. Although no direct harm is reported yet, the advisory indicates a high confidence that the app's use could lead to violations of rights and harm to critical infrastructure or democratic processes. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to significant harms, but no actual harm has been reported at this stage.

Will the AI superstar DeepSeek end up like TikTok? Italy already banned it

2025-01-31
Phone Arena
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system as it is described as an AI platform. The Italian data protection authority's action to ban the app due to insufficient transparency about personal data handling indicates concerns about potential violations of data protection laws and users' privacy rights. Although no direct harm is explicitly reported as having occurred, the ban and investigation are responses to plausible risks of harm to users' rights and data privacy. Since the event centers on the potential for harm and regulatory intervention before harm is realized, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek starts getting pulled from app stores as privacy investigations get underway

2025-01-30
BGR
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that processes user data and generates outputs based on a reasoning model. The app's collection and storage of personal data on Chinese servers without clear legal basis or transparency has led to privacy investigations and regulatory actions, which constitute violations of applicable data protection laws and users' rights. The removal of the app from stores and official warnings reflect direct consequences of these harms. Therefore, this event qualifies as an AI Incident due to realized harm involving violations of rights and legal obligations related to personal data protection stemming from the AI system's use.

South Korea to send inquiry to China's DeepSeek over data privacy concerns

2025-01-31
The Korea Herald
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI services) and concerns about its data collection and use practices. However, the event is about a regulatory inquiry and potential investigation, not about an actual realized harm or incident. The concerns are about possible privacy risks, which could plausibly lead to harm if data misuse occurs, but no harm is reported as having happened yet. Therefore, this is best classified as Complementary Information, as it provides context on governance and oversight responses to AI-related privacy concerns without describing a specific AI Incident or AI Hazard.

DeepSeek banned from Google, Apple app stores in Italy

2025-01-30
Businessday NG
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Deepseek) and its use in processing personal data, which is under regulatory scrutiny for compliance with GDPR. The app has been banned from app stores in Italy pending investigation, indicating potential risks but no confirmed harm yet. The mention of monitoring AI applications for election interference further supports the presence of plausible future harm. Since no direct or indirect harm has been confirmed, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the regulatory action and potential risks, not on updates or responses to a past incident. It is not Unrelated because the event clearly involves an AI system and potential harms.

DeepSeek gets removed from Apple and Google app stores in Italy amid GDPR and privacy probe

2025-01-29
Neowin
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI-powered app with a reasoning model). The event involves its use and data processing practices being investigated for compliance with privacy laws (GDPR). Although the app has been removed from app stores in Italy, the article does not report any confirmed harm or violation, only ongoing investigations and regulatory scrutiny. This situation represents a plausible risk that the AI system's data handling could lead to violations of privacy rights, which fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update; the regulatory probe and app removal indicate a credible potential for harm.

Italy's Garante Puts AI Data Practices Under Scrutiny

2025-01-29
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its data usage practices potentially violating data protection laws, which relate to human rights. However, no actual harm or violation has been reported yet; the Garante is investigating and demanding explanations. This fits the definition of Complementary Information, as it provides context on governance and regulatory responses to AI data practices without describing a specific AI Incident or AI Hazard.

South Korea watchdog to question DeepSeek over user data

2025-01-31
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article describes regulatory scrutiny over DeepSeek's AI chatbot concerning its handling of personal data, which is an AI system. The investigations by data protection authorities indicate potential risks related to violations of data privacy and legal obligations protecting personal information. Although no actual harm or incident has been reported, the situation presents a credible risk that the AI system's use could lead to violations of rights or legal breaches. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek AI 'pulled from Italy's app stores' amid data privacy concerns

2025-01-30
ReadWrite
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) whose data handling practices are under investigation by multiple data protection authorities due to concerns about privacy and potential misuse of personal data. The article does not report any realized harm or confirmed violations but highlights credible regulatory concerns and potential risks related to data privacy and compliance with national intelligence laws. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to violations of rights or other harms, but no incident has yet occurred. The involvement of regulators and the investigation into data practices indicate a credible risk rather than an actual incident.

Is DeepSeek a Trojan Horse? What You Need To Know

2025-01-31
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system, DeepSeek AI, which processes large datasets and generates text using natural language processing. The concerns raised relate to the use of this AI system and its data handling practices, which could plausibly lead to harms such as privacy violations, surveillance, and national security risks. However, no direct or indirect harm has been reported as having occurred yet. The article primarily discusses potential risks, regulatory investigations, and debates about data privacy and influence, fitting the definition of an AI Hazard. It does not describe a realized AI Incident or provide complementary information about a past incident or response. Therefore, the event is best classified as an AI Hazard.

DeepSeek AI could jeopardize national security, US officials say

2025-01-30
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose use and data handling practices raise national security concerns. The article does not describe any realized harm or incident but highlights credible risks that the AI system could be used for disinformation or data exploitation by a foreign government, which could lead to harm to communities or violations of rights. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future, but no direct or indirect harm has yet occurred.

DeepSeek vanished from Italian App Store & Google Play Store

2025-01-30
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (DeepSeek) that collects extensive personal data, raising significant privacy concerns. The involvement of data protection authorities filing complaints indicates potential legal and rights-related issues. However, the article does not report any realized harm or confirmed violations yet, only the potential for such harm if data handling is improper. The app's removal from stores suggests regulatory action but does not confirm direct harm. Thus, the event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to violations of rights and legal obligations, but no confirmed incident has occurred.

DeepSeek AI's data tracking includes what you type in the app

2025-01-30
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) and discusses its use and data handling practices that could plausibly lead to violations of privacy and human rights due to data access by Chinese authorities. Although no specific harm or incident is reported, the potential for such harm is credible and significant, especially for high-risk users. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving violations of rights and privacy.

DeepSeek: Why the hot new Chinese AI chatbot has big privacy and security problems

2025-01-29
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and highlights concerns about data collection, storage, and potential unauthorized access by cybercriminals, which could lead to harms such as privacy violations and targeted cyberattacks. Although no direct harm has been reported yet, the plausible risk of such harms occurring due to the AI system's operation and data practices qualifies this as an AI Hazard rather than an Incident. The focus is on potential future harms and national security risks, not on realized harm or ongoing incidents.

AI chatbot DeepSeek vanishes from Italian app stores

2025-01-29
The Local Italy
Why's our monitor labelling this an incident or hazard?
The article describes an official inquiry into DeepSeek's AI chatbot's data processing practices, focusing on privacy and legal compliance. The AI system's involvement is clear, and the regulatory action implies potential risks to personal data privacy, which is a human rights concern. However, no actual harm or violation has been confirmed or reported yet. The removal from app stores is a precautionary measure pending investigation. This fits the definition of an AI Hazard, as the development or use of the AI system could plausibly lead to an AI Incident if data protection violations are confirmed, but no incident has yet materialized.

Business Matters Full Broadcast (Jan. 31)

2025-01-31
NTD
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems (chatbots) and concerns about data misuse and exposure of sensitive information, which implicates potential violations of legal and privacy rights. Since the harm is not confirmed but plausible, and investigations are ongoing, this fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI in the chatbot development and the potential for harm through data exposure and national security risks justify this classification.

DPC questions DeepSeek's data processing of Irish users

2025-01-30
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The article describes regulatory scrutiny of DeepSeek's AI system's data processing practices, focusing on privacy and data protection concerns. While the AI system is clearly involved, and there are concerns about potential misuse or mishandling of personal data, the article does not report any realized harm or confirmed violations. The event is about the potential risks and regulatory responses, not an incident of harm. Therefore, it fits the definition of Complementary Information, as it provides context and updates on governance and oversight related to AI systems, rather than reporting an AI Incident or AI Hazard.

Italy's Garante seeks clarifications from DeepSeek AI

2025-01-29
Verdict
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing regulatory investigation into DeepSeek's AI model regarding its handling of personal data, which is a fundamental right protected by law. Although no actual harm has been confirmed, the concerns expressed by Italy's Garante and other authorities about data privacy and national security imply a credible risk that the AI system's use could lead to violations of rights or other harms. Therefore, this situation fits the definition of an AI Hazard, as the development and use of DeepSeek's AI system could plausibly lead to an AI Incident if the issues are not resolved.

Italy's agency has questions for DeepSeek

2025-01-29
Manila Standard
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's chatbot) and concerns its development and use, specifically regarding personal data processing and privacy compliance. However, no actual harm or violation has been reported yet; the agency is seeking information and has given a deadline for response. This situation represents a plausible risk of harm (privacy violations) but no confirmed incident has occurred. Therefore, it qualifies as an AI Hazard, reflecting potential future harm related to data privacy and protection.

Italian Regulators Demand Answers from DeepSeek on Data Privacy Concerns

2025-01-29
Techiexpert.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved as DeepSeek uses AI systems trained on data, including personal data. The event concerns the development and use of AI systems and their data practices. However, there is no indication that any harm or violation has occurred yet, only a regulatory inquiry and potential risk assessment. Therefore, this is a case of Complementary Information, as it provides context on governance and regulatory responses to AI data privacy concerns without reporting an AI Incident or AI Hazard.

DeepSeek's Impact on Personal Data Security: Concerns and Solutions

2025-01-31
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and its use, as well as regulatory actions taken due to concerns about personal data protection and security risks. However, there is no indication that any actual harm (such as injury, rights violations, or disruption) has occurred yet. The regulatory actions and bans reflect concerns about plausible future harm, but no incident has materialized. Therefore, this event fits the definition of Complementary Information, as it provides context on societal and governance responses to AI-related data protection issues without describing a specific AI Incident or AI Hazard.

Experts urge caution with DeepSeek AI chatbot due to China links

2025-01-30
dpa International
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot (an AI system) whose use involves collecting and processing user data. The concerns focus on potential risks to data privacy and security, including sharing data with public authorities, which could lead to violations of privacy rights or other harms. However, no direct or indirect harm has been reported yet. The article serves as a warning about plausible future harms rather than describing an actual incident. Therefore, this event fits the definition of an AI Hazard, as the development and use of DeepSeek could plausibly lead to an AI Incident involving data privacy violations or security breaches.

Italian Data Protection Watchdog Seeks Answers from DeepSeek

2025-01-29
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's large language model) and concerns its development and use, specifically the data collection and processing practices for training the AI. The Italian DPA's inquiry is motivated by concerns over possible violations of data protection laws (a breach of obligations under applicable law protecting fundamental rights). However, the article does not report any actual harm or violation having occurred yet, only the investigation and potential risk. Thus, it fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident if data protection violations are confirmed or if personal data misuse occurs.

Privacy on DeepSeek: Should Users Be Concerned?

2025-01-29
Global Security Mag Online
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that collects and processes user data to improve its AI models. The article highlights that user data may be exposed or misused due to the platform's data storage in countries with weak privacy laws, government access to data, and censorship practices, constituting violations of privacy and potentially human rights. The recent cyberattack further demonstrates vulnerabilities that have already disrupted service and could lead to harm if sensitive data is compromised. These issues represent actual harms or breaches linked to the AI system's use and data handling, qualifying this event as an AI Incident rather than a mere hazard or complementary information.

National Security Concerns Rise Regarding China's DeepSeek Application

2025-01-30
TechBizWeb
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has led to actual harms: a publicly exposed database containing user data and chat histories, raising privacy and security concerns. The AI's data handling practices and susceptibility to manipulation for malicious code generation represent violations of user rights and potential harm to communities. The national security responses (bans by the U.S. Navy and assessments by the White House) further confirm the recognition of these harms. Although some concerns are potential, the realized data breach and misuse risks meet the criteria for an AI Incident rather than a hazard or complementary information.

Italy Demands Transparency from DeepSeek: GDPR Compliance Under Scrutiny

2025-01-29
るなてち
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI models) and concerns its use and compliance with data protection laws (GDPR). However, there is no indication that any actual harm (such as privacy violations or data breaches) has occurred yet. The event is about a regulatory inquiry and potential future legal consequences if non-compliance is found. Therefore, it represents a plausible risk of harm related to AI use but no realized harm at this stage, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

DeepSeek: Why the Hot New Chinese AI Chatbot Has Big Privacy and Security Problems

2025-01-30
Paperblog
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and discusses its data collection and storage practices, which could plausibly lead to harms such as privacy breaches, cyberattacks, or misuse of personal data. No direct or indirect harm has been reported yet, but the concerns raised by officials and experts about national security and cybercrime risks indicate a credible potential for future harm. Hence, this fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Italy Bans Access to Chinese AI App DeepSeek

2025-01-31
see.news
Why's our monitor labelling this an incident or hazard?
The app DeepSeek is an AI system that processes personal data. The ban results from the app's refusal to clarify data handling practices, which implicates potential violations of data privacy laws. However, the article does not report any realized harm but rather a regulatory action to prevent potential harm to personal data privacy. Therefore, this is a governance response and enforcement action related to AI, fitting the category of Complementary Information rather than an AI Incident or Hazard.

To use DeepSeek, US Department of Defense employees connected their computers to Chinese servers

2025-01-31
zaobao.com.sg
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (DeepSeek chatbot) by defense employees, involving connecting to foreign servers, which raised security concerns leading to access restrictions. While no direct harm such as injury or data breach is explicitly reported, the potential for harm to critical infrastructure security and data privacy is clear and plausible. The event is primarily about the use and potential misuse of an AI system that could lead to significant harm, thus fitting the definition of an AI Hazard rather than an Incident, as no realized harm is confirmed. It is not merely complementary information because the focus is on the potential security risks and actions taken to mitigate them, not on broader governance or research context. Therefore, the classification is AI Hazard.

[404 Archive] Black Noise: How should we view the Italian regulator's ban on DeepSeek?

2025-02-01
中国数字时代
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose operation has led to regulatory action due to failure to comply with data protection laws, which protect fundamental rights to privacy. The blocking of the app in Italy and investigations in France and South Korea show that the AI system's use has directly or indirectly led to violations or risks of violations of legal obligations protecting personal data privacy. This fits the definition of an AI Incident as it involves harm in the form of violations of human rights and legal obligations. The article does not merely discuss potential future harm or general AI governance issues but reports concrete regulatory enforcement actions taken due to the AI system's data practices. Hence, the classification is AI Incident.

Italy: Personal Data Protection Office blocks Chinese artificial intelligence DeepSeek

2025-01-30
wnp.pl
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved. The event stems from the use and development of this AI system and its handling of personal data. The blocking decision was made because of potential violations of data protection laws, which are legal obligations protecting fundamental rights. Although no direct harm is reported as having occurred, the authority's intervention indicates a credible risk of harm to personal data privacy and rights. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to violations of rights if unaddressed.

Italy: Personal Data Protection Office blocks Chinese artificial intelligence

2025-01-30
naszdziennik.pl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use was blocked by a regulatory authority due to concerns about data protection and privacy. Although no direct harm is reported, the blocking action is a response to potential legal violations regarding personal data handling, which relates to protection of fundamental rights. Since the event describes a regulatory intervention to prevent possible harm rather than an incident where harm has already occurred, it qualifies as Complementary Information about governance and societal response to AI risks.

Italian authority blocks DeepSeek: "Lack of transparency"

2025-01-31
Business Insider Polska
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use raised concerns about personal data processing and privacy compliance. Although no direct harm is reported, the regulatory blocking is due to potential violations of legal obligations protecting fundamental rights (privacy). Since the event concerns a regulatory intervention addressing possible breaches of data protection law linked to AI use, it fits the definition of Complementary Information, as it provides governance response and enforcement context rather than reporting a realized harm or imminent hazard.

Italy blocked DeepSeek to protect citizens' data

2025-01-30
rmf24.pl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised concerns about compliance with data protection regulations and the safeguarding of personal data. Although no actual harm (such as data breach or misuse) is reported, the blocking decision reflects a credible risk that the AI system's operation could lead to violations of fundamental rights related to data privacy. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights if data protection is not ensured.

DeepSeek blocked in Italy

2025-01-30
pb.pl
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) whose data processing practices are under scrutiny for potentially violating data protection laws, which protect fundamental rights related to privacy. Although no direct harm has been reported yet, the authority's urgent blocking action indicates a credible risk that the AI systems' use could lead to violations of rights and harm to individuals' personal data privacy. Therefore, this situation constitutes an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident involving violations of fundamental rights under applicable law (GDPR).

DeepSeek blocked as a matter of urgency. Italy is wary of Chinese artificial intelligence

2025-01-31
forsal.pl
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) and concerns about personal data processing and privacy, which relates to fundamental rights under data protection laws. However, there is no indication that an actual data breach or harm has occurred yet. The blocking is a preventive regulatory measure due to insufficient information and potential risk, thus representing a plausible future harm scenario rather than a realized incident. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Authority blocked Chinese AI

2025-01-31
TVN24
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved. The event stems from the use and development of this AI system, specifically its data processing practices. Although no direct harm has been reported yet, the regulator's decision is based on plausible risks to the privacy and security of personal data of millions of people, which could constitute a violation of fundamental rights under applicable law. Therefore, this event represents an AI Hazard, as the AI system's operation could plausibly lead to an AI Incident involving violations of data protection rights if unaddressed. There is no indication that harm has already occurred, so it is not an AI Incident. The event is not merely complementary information or unrelated news, as it concerns a regulatory blocking action due to potential harm risks.

"DeepSeek": Another Tool for Extending Authoritarian Rule?

2025-01-28
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) explicitly described as a large language model performing natural language processing and data mining. The AI system is used in a way that directly leads to harm: censorship of politically sensitive content and deletion of user-generated answers, which restricts freedom of expression and access to information, violating human rights. Furthermore, the AI system collects and stores user data under policies that allow government access without consent, implicating privacy rights violations. These harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident due to direct involvement of the AI system in causing violations of fundamental rights and harms to communities through censorship and privacy breaches.

"DeepSeek": Another Tool for Extending Authoritarian Rule? (Commentary by Wang Yun)

2025-01-29
看中國
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, a large language model used for data mining and natural language processing. The system's use includes censorship of politically sensitive topics and collection of personal data under laws that permit government access without consent, constituting violations of human rights and privacy. These harms are realized and ongoing, as users report censorship and data security concerns. The AI system's role is pivotal in enabling these harms, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents actual censorship and privacy infringements linked to the AI's deployment and use.

"DeepSeek": Another Tool for Extending Authoritarian Rule? (Commentary by Wang Yun)

2025-01-29
看中國
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) explicitly described as a large language model with data mining and natural language processing capabilities. Its use is linked to censorship and surveillance by the Chinese government, which restricts user content and collects personal data under national security laws without consent. These actions constitute violations of human rights and harm to communities by suppressing free expression and manipulating information. The harms are realized and ongoing, not merely potential, making this an AI Incident under the framework.

First shot: DeepSeek pulled from app stores in Italy, while Liang Wenfeng gets a hero's welcome in his hometown

2025-01-31
ار.اف.ای - RFI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use and data handling practices have directly led to regulatory intervention due to privacy and data security concerns. The exposure of sensitive data and failure to comply with data protection regulations constitute a violation of legal obligations protecting personal rights, which fits the definition of an AI Incident. The AI system's malfunction or mismanagement has caused harm to users' data privacy and security, justifying classification as an AI Incident rather than a hazard or complementary information.

First shot: DeepSeek pulled from app stores in Italy, while Liang Wenfeng gets a hero's welcome in his hometown

2025-01-31
ار.اف.ای - RFI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, "DeepSeek," which is explicitly described as an AI model. The Italian regulator's action to block the system is due to concerns about personal data usage and security breaches, which are direct harms related to violations of data protection laws (a breach of obligations under applicable law). The exposure of sensitive data and API keys constitutes a significant risk to user privacy and security, fulfilling the criteria for harm. The AI system's development and use have directly led to these harms, making this an AI Incident rather than a hazard or complementary information. The article also discusses social reactions but these do not change the classification.

Surge in South Korean users raises concerns as government begins investigating DeepSeek

2025-02-01
NTDChinese
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose development and use involve collecting and processing personal data. The article describes governmental investigations into whether DeepSeek improperly collects and stores personal information, which could lead to violations of personal data protection laws and individuals' privacy rights. While the article does not confirm actual data breaches or harm, the ongoing investigations and international concerns indicate a credible risk of harm. Therefore, this situation qualifies as an AI Hazard, as the AI system's use could plausibly lead to violations of rights and data privacy harms.

US Department of Defense employees reportedly used DeepSeek chatbot

2025-01-31
zaobao.com.sg
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) was used by DoD employees, and the department took action to block access due to concerns. Although no direct harm is reported, the event involves the use of an AI system whose operation could plausibly lead to harm related to security and privacy breaches in a critical infrastructure context. Therefore, this qualifies as an AI Hazard rather than an Incident, as the harm is potential, not realized.

Chinese app under scrutiny: one European country has blocked DeepSeek, two are investigating

2025-01-31
oslobodjenje.ba
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI service) whose use and data practices are under regulatory investigation for potential violations of personal data protection laws (GDPR). The Italian regulator has blocked the service due to insufficient information and potential large-scale privacy risks, indicating plausible future harm to users' privacy rights. However, the article does not describe any actual harm or confirmed violation that has already occurred. The focus is on regulatory precautionary measures and investigations, which aligns with the definition of an AI Hazard (plausible future harm). It is not Complementary Information because the main narrative is about the regulatory actions and potential risks, not about responses to a past incident. It is not Unrelated because the AI system and its potential impact on privacy are central to the event.

European Regulators on Alert Over DeepSeek

2025-01-31
lidermedia.hr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use raises concerns about personal data privacy and compliance with GDPR. The regulators' blocking and investigations are responses to potential risks but do not report any realized harm such as data breaches or privacy violations. Therefore, the event represents a plausible risk of harm (privacy violations) due to the AI system's use, qualifying it as an AI Hazard rather than an AI Incident. It is not merely complementary information because the regulatory blocking is a direct action based on potential harm, and it is not unrelated as it clearly involves an AI system and regulatory response to its risks.

Italy Blocks DeepSeek; France and Ireland Investigate Threat to Users' Personal Data - Novi list

2025-01-31
Novi list
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, as it is an AI platform whose data processing practices are being investigated. The event stems from the use and development of the AI system, specifically regarding personal data handling and training data sourcing. However, the article does not report any actual harm or violation having occurred yet, only potential risks and regulatory concerns. Therefore, this qualifies as an AI Hazard, since the AI system's use could plausibly lead to harm related to personal data privacy violations, but no incident has been confirmed or realized at this stage.

European Regulators on Alert Over DeepSeek: Italy Has Blocked It, France and Ireland Are Investigating

2025-01-31
Zimo.co
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use of personal data has prompted regulatory investigations and a service block due to insufficient information and potential privacy risks. While the AI system's development and use raise plausible risks of harm to users' personal data privacy (a violation of rights under GDPR), no actual harm or confirmed legal breaches are reported so far. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving privacy violations, but such harm has not yet been established or confirmed.

Italy Blocks DeepSeek; France and Ireland Open Investigations

2025-01-31
IndexHR
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, as it is an AI-based service processing personal data. The regulators' actions stem from concerns about the AI system's use and potential misuse of personal data, which could lead to violations of privacy rights (a form of harm under human rights and legal obligations). Since no actual harm or data breach is reported, only investigations and precautionary blocking, the event fits the definition of an AI Hazard rather than an AI Incident. The article focuses on regulatory responses and potential risks, not on realized harm or incidents.

European Regulators on Alert Over DeepSeek

2025-01-31
Poslovni Dnevnik
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use of personal data is being investigated by regulators for potential privacy risks. However, the article does not report any realized harm or confirmed violation of rights; rather, it focuses on regulatory measures, information requests, and investigations. Therefore, this is best classified as Complementary Information, as it provides updates on governance and societal responses to potential AI-related risks without describing a concrete AI Incident or an imminent AI Hazard.

EUROPE ON ALERT: Italy, France and Ireland Move Against Chinese AI Search Engine - 'There Is Great Risk!'

2025-01-31
slobodna-bosna.ba
Why's our monitor labelling this an incident or hazard?
The article details regulatory interventions prompted by concerns over the AI system's handling of personal data and compliance with privacy laws. While the AI system is clearly involved, the harms (privacy violations) are not confirmed to have occurred but are plausible if the system's data practices are improper. The blocking and investigations are precautionary measures addressing these potential risks. Hence, this event fits the definition of an AI Hazard, as it involves plausible future harm related to the AI system's use and data processing, but no direct or indirect harm has yet materialized.

Europe on Alert: After Italy, China's DeepSeek Under Scrutiny in France and Ireland Too

2025-01-31
tportal.hr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI-based service) and concerns about the use of personal data, which relates to potential violations of data protection laws and fundamental rights. The Italian regulator's blocking of the service and investigation indicate a serious concern about possible harm. However, no actual harm or incident has been reported; the focus is on the potential risk and regulatory response. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to violations of rights or harm to users' data privacy, but no incident has yet occurred.

Italian regulator orders block of DeepSeek "with immediate effect"

2025-01-31
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek language model) whose operation and data handling practices are under regulatory scrutiny due to insufficient transparency and potential non-compliance with data protection laws. However, the article does not report any realized harm such as injury, rights violations, or other direct damages caused by the AI system. Instead, it describes regulatory actions and an ongoing investigation to prevent possible harm related to data privacy and legal compliance. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on governance and regulatory responses to AI use, without describing a concrete AI Incident or an imminent AI Hazard.

Italian regulator orders "urgent and immediately effective" block of DeepSeek

2025-01-31
Expresso
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a language model) whose use and data processing practices are under regulatory scrutiny due to insufficient transparency and potential non-compliance with data protection laws. Although no direct harm is explicitly reported, the regulator's urgent blocking order and inquiry indicate a credible risk of violation of fundamental rights related to personal data protection. This constitutes an AI Hazard because the AI system's development and use could plausibly lead to violations of rights and harms to users' privacy if unaddressed. There is no indication that harm has already occurred, so it is not an AI Incident. The event is not merely complementary information or unrelated news, as it concerns regulatory action directly linked to the AI system's operation and potential risks.

Italian regulator orders "urgent and immediately effective" block of DeepSeek

2025-01-31
PÚBLICO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a language model) whose use and data practices are under regulatory scrutiny due to insufficient transparency about training data and personal data handling. Although no direct harm has been reported, the regulator's urgent blocking and investigation indicate a credible risk of violations of data protection rights (a form of human rights violation). Since the harm is potential and the event centers on preventing possible violations, this qualifies as an AI Hazard rather than an AI Incident. The article does not describe realized harm but focuses on regulatory action to prevent it.

Italian regulator orders "urgent and immediately effective" block of the DeepSeek app

2025-01-31
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek language model) whose development and use raised concerns about data privacy and compliance with regulations. The Italian regulator's action to block the app and investigate indicates a response to potential violations of data protection laws, which relate to fundamental rights. However, the article does not report any realized harm such as injury, rights violations already occurring, or other direct damages. Instead, it describes regulatory intervention to prevent or address potential legal breaches and protect user data. Therefore, this event is best classified as Complementary Information, as it provides an update on governance and regulatory responses to AI use, without describing a concrete AI Incident or a plausible future harm scenario (AI Hazard).

Italian regulator orders "urgent and immediately effective" block of the DeepSeek app - SAPO.pt

2025-01-31
SAPO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek language model) whose development and use include processing personal data. The Italian regulator's intervention due to lack of transparency and insufficient information about data collection and training indicates potential violations of data protection laws and risks to users' privacy. While no direct harm has been reported, the blocking and investigation reflect credible concerns that the AI system's operation could lead to harm, such as violations of privacy rights. Therefore, this qualifies as an AI Hazard, as the event plausibly could lead to an AI Incident if unaddressed.

DeepSeek blocked in Italy, regulator opens investigation

2025-01-31
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and describes regulatory action taken to restrict its operation and investigate potential data protection issues. There is no indication that harm has already occurred, only that the regulator is acting to prevent possible violations of user data rights. This fits the definition of Complementary Information, as it details a governance response to concerns about an AI system, rather than reporting an AI Incident (harm realized) or AI Hazard (plausible future harm).

Why Italy and the US Navy Have Banned the Use of DeepSeek

2025-02-03
Tempo
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data to train its models. Italy's data protection authority halted its data processing due to non-compliance with GDPR, indicating a violation of legal rights related to personal data privacy. The US Navy's ban based on security and ethical concerns further indicates recognized harms or risks linked to the AI system's use. These regulatory bans are responses to actual or ongoing harms related to privacy violations and security risks, fulfilling the criteria for an AI Incident. The event is not merely a potential risk (hazard) or complementary information but a concrete incident involving harm and regulatory action.

Italy Blocks DeepSeek to Protect Users' Personal Data

2025-01-31
Liputan 6
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek, an AI model) is explicitly involved. The event stems from the use and deployment of this AI system. Although no direct harm is reported yet, the blocking action is a preventive measure due to concerns about potential violations of data privacy rights, which are fundamental rights. Since the harm is plausible but not confirmed or realized, this constitutes an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action based on potential harm, nor is it unrelated.

Hundreds of Global Companies Rush to Block China's DeepSeek AI Over Data Leak Fears

2025-02-01
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) and concerns about data leakage to the Chinese government, which could violate user privacy and data protection rights. The blocking actions by companies indicate recognition of a credible risk, but no actual incident of data leakage or harm has been reported. Hence, the event is best classified as an AI Hazard, reflecting plausible future harm rather than realized harm. It is not Complementary Information because the main focus is on the risk and preventive actions, not on updates or responses to a past incident. It is not an AI Incident because no harm has yet occurred.

Several Countries Begin Questioning DeepSeek About Its Users' Data

2025-01-31
Hidayatullah.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the R1 chatbot) and concerns about its data handling practices, which relate to fundamental rights (data privacy). However, there is no direct or indirect evidence of harm having occurred yet, only investigations and regulatory actions. This fits the definition of Complementary Information, as it details governance responses and societal scrutiny of AI systems, enhancing understanding of AI ecosystem risks without reporting a concrete AI Incident or plausible imminent AI Hazard.

Pentagon Blocks DeepSeek After Employee Data Ends Up on Chinese Servers

2025-01-31
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (DeepSeek) and the resulting regulatory and organizational responses due to concerns about data privacy and potential intelligence risks. There is no report of direct harm or violation caused by the AI system's malfunction or misuse, only potential risks and regulatory scrutiny. The Pentagon's blocking of the AI system and the data protection authorities' inquiries are governance and societal responses to these concerns. Hence, the event does not meet the criteria for an AI Incident (no direct or indirect harm realized) nor an AI Hazard (no explicit plausible future harm described beyond regulatory concerns). It is not unrelated, as it involves an AI system and its implications. Therefore, the classification is Complementary Information.

Italy Shuts Off Access to China's DeepSeek AI - Afraid of a Breach?

2025-01-31
jpnn.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) and concerns about data privacy and legal compliance, which relate to human rights protection. The Italian authority's action is a preventive measure to avoid potential violations of data protection rights. Since no actual harm or violation has been reported as having occurred yet, but there is a credible risk of such harm if the AI system continued operating without restrictions, this situation qualifies as an AI Hazard. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on a specific AI system and regulatory response to potential harm.

Hundreds of Companies Worldwide Block DeepSeek Over Potential Data Leaks - Beritaja

2025-02-01
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use is being restricted due to concerns about potential data leakage to a government, which could violate privacy and data protection rights. No actual data breach or harm has been reported yet, only the potential risk. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident (harm). The blocking actions by companies are preventive measures in response to this credible risk. Hence, the event is not an AI Incident (no realized harm), nor Complementary Information or Unrelated.

US Fears DeepSeek Sends User Data to China and Misuses It

2025-02-01
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI generative platform explicitly mentioned as collecting user data and censoring content, which directly implicates the AI system's use leading to privacy and censorship harms. These harms fall under violations of human rights and harm to communities. The article reports these harms as occurring, not just potential, and the AI system's role is pivotal. Hence, this is an AI Incident rather than a hazard or complementary information.

Italy Shuts Off Access to DeepSeek Over Privacy Concerns - Beritaja

2025-01-31
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use and data collection practices have raised privacy concerns. The Italian authority's action to block access and investigate indicates a credible risk that the AI system's operation could lead to violations of privacy rights, a form of harm under the framework. Since no actual harm or incident is reported yet, but the risk is credible and regulatory intervention is underway, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the regulatory action and potential harm, not on updates or responses to a past incident. It is not unrelated because AI is central to the event.

Italy shuts off access to DeepSeek over privacy concerns

2025-01-31
Antara News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to concerns about violations of data privacy rights, prompting regulatory action including blocking access and formal investigation. The AI system's development and use have directly led to a breach of obligations under applicable law protecting fundamental rights (privacy). This fits the definition of an AI Incident because the AI system's use has caused or is causing harm related to rights violations. The regulatory response and investigation confirm the seriousness and realization of harm rather than a mere potential risk or complementary information.

DeepSeek Blocked in Italy Amid Fears User Data Could Be Exploited by Chinese Intelligence | Republika Online

2025-01-31
Republika Online
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use has raised serious concerns about data privacy and potential misuse by foreign intelligence, which implicates violations of fundamental rights and legal obligations (GDPR). The Italian government has blocked the app and launched investigations, indicating a credible risk of harm to users' privacy and rights. Additionally, OpenAI's claim of unauthorized distillation of their AI model suggests potential intellectual property rights violations. However, the article does not report actual realized harm or incidents caused by DeepSeek's operation, only potential and ongoing investigations. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

4 Chinese Technologies That Have America on Edge: DeepSeek, TikTok, Huawei, BYD - Katadata.co.id Technology

2025-01-31
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek's AI model, TikTok's recommendation algorithms, Huawei's advanced chips likely involving AI, and BYD's smart vehicles with integrated AI-related software/hardware). It details ongoing investigations and bans due to security, ethical, and data privacy concerns, indicating realized or direct harms (e.g., national security threats, unauthorized data access). The US government's active regulatory and investigative responses confirm the presence of harms or credible risks. Hence, the event qualifies as an AI Incident because the AI systems' development and use have directly or indirectly led to harms or violations, not merely potential future risks or general AI news.

After Sending US Tech Stocks Tumbling, DeepSeek Faces a TikTok-Style Ban - Katadata.co.id Technology

2025-01-31
katadata.co.id
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model) whose use has directly led to concerns about privacy violations, data security risks, and potential disinformation campaigns, which are harms to communities and violations of rights. The US Navy's ban and ongoing investigations indicate recognized harms or risks that have materialized or are imminent. The article describes actual impacts on stock markets and security responses, indicating realized harm rather than just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Italy Blocks China's DeepSeek AI to Protect User Data

2025-01-31
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised legal and rights concerns, specifically regarding personal data protection and transparency. The blocking and investigation indicate that the AI system's use has led to a breach or potential breach of data protection laws, which are designed to protect fundamental rights. Therefore, this constitutes an AI Incident due to violations of applicable law intended to protect fundamental rights. The harm is realized in the form of legal non-compliance and potential privacy violations, prompting regulatory action.

DeepSeek App Disappears in Italy in the Wake of a Data Security Investigation

2025-01-31
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) that processes user data and uses web scraping, which fits the definition of an AI system. The event stems from the use of this AI system and concerns about its compliance with data protection laws (GDPR), which protect fundamental rights. However, the article does not report any actual harm or violation having occurred yet, only a regulatory investigation and app removal as precautionary measures. Therefore, no direct or indirect harm has been confirmed. The event is primarily about the regulatory and societal response to potential AI-related data privacy issues, making it Complementary Information. It is not an AI Hazard because the risk is not hypothetical but under active investigation, and not an AI Incident because no harm is confirmed.

OpenAI Suspects DeepSeek Stole US AI Technology - Is It True?

2025-01-30
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, focusing on the use and misuse of AI model distillation. The event stems from the use and development of AI systems and concerns potential violations of intellectual property rights. However, no direct or indirect harm has been reported as having occurred yet. The focus is on investigation and prevention, which aligns with providing complementary information about responses to potential AI-related risks rather than reporting an incident or hazard. Therefore, this is best classified as Complementary Information.

US Defense Department Blocks DeepSeek After Employees Connect to Chinese Servers

2025-01-31
Liputan 6
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use by Pentagon employees led to connections to servers in China, where data is governed by laws that may require cooperation with intelligence agencies. This creates a credible risk of harm related to data security and potential breaches of confidentiality or espionage, which could impact national security. Although no actual harm or incident is reported, the blocking action by the Department of Defense reflects recognition of this plausible future harm. Hence, the event is best classified as an AI Hazard due to the plausible risk of harm from the AI system's use and data handling practices.

When DeepSeek Is Asked About Uyghur and Taiwan Issues, Its Answers Are Unexpected; US Cyber Experts Worried - Tribunnewsbogor.com

2025-01-31
Tribunnews Bogor
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) that is explicitly mentioned. Its use involves responding to user queries, including sensitive topics. The AI's refusal to answer questions about Uyghur human rights abuses, Taiwan sovereignty, Tiananmen, criticism of Xi Jinping, and censorship in China demonstrates a form of content censorship, which can be considered a violation of the right to access information, a human right. Since this behavior is occurring and is systemic in the AI's responses, it constitutes an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights. The event reports actual behavior of the AI system causing this harm, not just a potential risk or a complementary update.

Worried About Data Issues, Hundreds of Companies Block DeepSeek : Okezone Techno

2025-02-01
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and describes actions taken by companies and government agencies to block it due to concerns about data privacy and potential unauthorized data sharing with the Chinese government. This represents a plausible risk of harm (violation of privacy and security) stemming from the AI system's use. Since no actual harm or incident is reported, but credible concerns and preventive measures are described, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US Investigates Suspicion That DeepSeek Used Banned Nvidia Chips

2025-02-03
detikInet
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing investigation into whether DeepSeek illegally accessed Nvidia AI chips banned for export to China. The AI system's development and use involve these chips, which are critical for high-performance AI models. The potential harm includes violation of export control laws and possible strategic harm to US technological advantage. However, no direct harm or incident has been reported yet; the event is about a plausible risk and regulatory concern. Hence, it fits the definition of an AI Hazard, as the AI system's use of banned chips could plausibly lead to significant harms, but no harm has yet materialized.

Organizations Shun DeepSeek En Masse Over Security Concerns

2025-02-03
detikInet
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that processes user inputs and interactions to train its models, involving sensitive data collection and storage. The event details actual data exposure (a security breach) and widespread organizational bans due to privacy and security concerns, indicating realized harm related to data privacy and potential violations of user rights. These harms fall under violations of rights and harm to communities due to privacy breaches and risks of unauthorized data access. Therefore, this qualifies as an AI Incident because the AI system's use and data management have directly led to significant harm and organizational restrictions. The event is not merely a potential risk (hazard) or a complementary update but a concrete incident involving harm and response.

What Has Made China's DeepSeek AI Such a Sensation? Even the US Is Rattled

2025-01-30
detikInet
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek's LLM) and its development and deployment, including potential security and privacy concerns due to data transmission to China and government censorship compliance. However, it does not describe any direct or indirect harm caused by the AI system, nor does it report any incident or event where harm has occurred or is imminent. The concerns are more about potential risks and geopolitical competition rather than a specific AI incident or hazard. Therefore, this is best classified as Complementary Information, providing context and updates on AI ecosystem developments and responses rather than reporting an AI Incident or AI Hazard.

NASA Bans Employees from Using DeepSeek - Here's Why

2025-02-02
detikInet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is prohibited by government agencies due to concerns about security and privacy risks. However, there is no indication that any actual harm has occurred yet. The ban is a preventive action reflecting plausible future risks from the AI system's use within sensitive government environments. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm but no harm has been reported or realized at this time.

Causing a Global Stir, DeepSeek Is Suspected of Copying OpenAI's Models

2025-01-31
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (OpenAI's and DeepSeek's AI models) and discusses the use and development of these systems, specifically the alleged unauthorized distillation of OpenAI's models by DeepSeek. This relates to potential violations of intellectual property rights, which is a recognized harm under the AI Incident definition. However, since the article only reports allegations and investigations without confirmed harm or legal outcomes, it does not meet the threshold for an AI Incident. Instead, it represents a plausible risk of harm (intellectual property violation and competitive harm) that could lead to an AI Incident if confirmed. Therefore, it fits best as an AI Hazard, indicating a credible potential for harm due to the AI system's use and development.

DeepSeek Causes an Uproar, and China Braces for a Heavy Blow

2025-02-03
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI assistant) and its impact on the technology market and geopolitical strategy. However, it does not report any injury, rights violation, infrastructure disruption, or other harms caused by the AI system's development, use, or malfunction. The concerns are about competitive advantage and economic impact, which do not meet the criteria for AI Incident or AI Hazard. The article mainly provides context on policy discussions and market reactions, which fits the definition of Complementary Information.

DeepSeek App Suddenly Blocked, Government Speaks Out

2025-01-30
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use is under regulatory scrutiny for possible violations of data protection laws and risks of bias and election interference. While these concerns relate to potential violations of rights and harm to communities, the article does not report any realized harm or incident caused by the AI system. Instead, it describes a regulatory response and investigation to prevent such harms. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if issues are confirmed, but no incident has yet occurred.

Hundreds of Companies Worldwide Block DeepSeek over Potential Data Leaks

2025-02-01
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is being restricted due to concerns about potential data leakage to a government, which could violate privacy and data protection rights. Since no actual data breach or harm has been reported, but the risk is credible and has led to preventive blocking, this fits the definition of an AI Hazard. It is not an AI Incident because harm has not materialized, nor is it Complementary Information or Unrelated, as the focus is on the potential harm from the AI system's use.

Japan Drafts a Law to Reduce the Dangers and Risks Posed by AI

2025-01-31
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (e.g., DeepSeek, a large language model-based chatbot) and discusses the potential risks and harms of AI, but no actual harm or incident caused by AI is described. The article centers on the intention to create laws to manage AI risks, which is a governance response. Therefore, this is Complementary Information as it provides context and updates on societal and governance responses to AI-related risks without reporting a specific AI Incident or AI Hazard.

NASA Bans Its Employees from Using DeepSeek, Deeming It a Threat to US Security

2025-02-02
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The article describes a government ban on the use of an AI system (DeepSeek) due to concerns about potential security risks, but does not report any realized harm or incident caused by the AI system. The involvement of the AI system is clear, and the concerns relate to plausible future harm (data breaches, national security threats) if the AI system were used. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet occurred or been reported.

South Korean Government Asks DeepSeek for Clarification on Its Handling of Personal Data

2025-02-03
KBS WORLD Radio
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory inquiry into DeepSeek's AI system's data management practices due to concerns about possible personal data leaks. However, no actual data breach or harm has been confirmed or reported yet. The event reflects a plausible risk of harm related to personal data privacy but does not document a realized incident. Therefore, it qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving personal data leakage, but no direct or indirect harm has yet occurred.

Censorship of DeepSeek's Answers and Bias in Artificial Intelligence

2025-01-31
Kompas.id
Why's our monitor labelling this an incident or hazard?
The article centers on the identification and discussion of bias in an AI system (DeepSeek) and the general problem of bias in AI. It does not report a concrete event where the AI's biased behavior caused actual harm or a specific incident of malfunction or misuse leading to harm. The discussion is more about the nature of bias, its sources, and the need for interdisciplinary approaches to mitigate it. Therefore, this fits the definition of Complementary Information, as it provides supporting context and understanding about AI bias without reporting a new AI Incident or AI Hazard.

Italy Shuts Off Access to DeepSeek, the Viral AI Technology from China

2025-01-31
investor.id
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI chatbot) whose use involves personal data collection. The Italian authority's actions are precautionary and regulatory, aiming to prevent potential privacy violations. There is no indication that harm has already occurred, only that the AI system's use could plausibly lead to violations of data protection and privacy rights. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm and regulatory intervention to prevent an AI Incident.

OpenAI Accuses DeepSeek of Using Its Models to Train AI

2025-01-30
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes an event in which DeepSeek allegedly used OpenAI's model outputs and data without authorization to train its own AI model, a violation of intellectual property rights and a recognized harm under the AI Incident definition. The involvement of AI systems is clear, and the harm is not merely potential but under active investigation, implying realized or ongoing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

US Government Suspects DeepSeek of Copying ChatGPT

2025-01-31
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article describes a situation where DeepSeek allegedly uses a method (distillation) to extract knowledge from OpenAI's ChatGPT model, which OpenAI prohibits. This raises concerns about intellectual property rights and unauthorized use of AI technology. Although these concerns relate to potential violations of intellectual property rights (a form of harm under AI Incident definitions), the article does not confirm that such violations have been legally established or that harm has materialized. The focus is on suspicion, investigation, and potential future legal actions rather than confirmed incidents. Therefore, this event is best classified as Complementary Information, as it provides context and updates on ongoing AI ecosystem developments and governance issues without reporting a confirmed AI Incident or AI Hazard.
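The distillation method at issue can be illustrated with a minimal sketch: a "student" model is trained to imitate a "teacher" model's output distribution rather than raw labels. The code below is a generic, illustrative implementation of the standard temperature-softened distillation loss; it is not DeepSeek's or OpenAI's actual pipeline, and all function names are ours.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this pushes the student to reproduce the teacher's behavior,
    which is why unauthorized distillation is treated as extracting a model.
    """
    p = softmax(teacher_logits, temperature)  # teacher's softened probabilities
    q = softmax(student_logits, temperature)  # student's softened probabilities
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    # Scale by temperature^2 to keep gradient magnitudes comparable.
    return kl * temperature ** 2
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence yields a positive penalty, giving the student a training signal derived entirely from the teacher's outputs.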

The Countries Restricting and Blocking DeepSeek

2025-01-31
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use by government personnel has raised security and ethical concerns. The governments' actions to block or restrict access indicate a credible risk of harm, especially to national security, even though no actual harm is reported yet. The article focuses on the potential risks and preventive responses rather than describing realized harm. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident if not controlled.

Advanced Chinese-Made AI Shakes the World as America Bars Its Soldiers from Using DeepSeek

2025-01-30
Hidayatullah.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and its use, with the U.S. military banning its use due to security and ethical concerns. However, there is no indication that the AI system has directly or indirectly caused any harm yet. The concerns and ban reflect plausible future risks rather than realized incidents. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm related to security and ethics, but no actual harm has been reported.

Accounts Linked to the Chinese Government Promoted DeepSeek AI Before the US Tech Stock Crash

2025-02-03
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and its promotion by state-linked actors, which is relevant to AI developments and geopolitical competition. However, it does not report any direct or indirect harm caused by the AI system's development, use, or malfunction. The stock market impact is economic and indirect, not a direct AI Incident as defined. The ongoing investigations and allegations suggest potential future risks but do not confirm harm yet. Thus, the event is best classified as Complementary Information, providing context on AI ecosystem developments, geopolitical narratives, and market reactions without describing a specific AI Incident or Hazard.

South Korea's Privacy Watchdog to Examine DeepSeek's Information Management System

2025-02-03
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI application) and concerns its use and data management practices. However, the article does not report any realized harm or incident caused by the AI system. Instead, it details ongoing investigations and potential regulatory actions, which are responses to concerns about privacy and data protection. This fits the definition of Complementary Information, as it provides context and updates on governance and societal responses to AI-related privacy issues without describing a specific AI Incident or AI Hazard.

Government to Study the DeepSeek AI Platform's Impact on Malaysia

2025-02-02
Perbaiki diri, bukan sekadar pesta
Why's our monitor labelling this an incident or hazard?
The article discusses the government's proactive evaluation of a new AI platform's potential impact and adaptation strategies. There is no indication of any realized harm, malfunction, or misuse related to DeepSeek. The content is about monitoring and preparing for AI developments rather than reporting an incident or hazard. Therefore, it fits the category of Complementary Information, providing context and updates on AI ecosystem developments without describing an AI Incident or AI Hazard.

Government to Study the DeepSeek AI Platform's Impact on Malaysia - Gobind Singh

2025-02-02
Berita Harian
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI system DeepSeek, nor does it indicate any plausible immediate risk of harm. It is a report on governmental review and consideration of an AI technology's impact and potential adaptation, which fits the definition of Complementary Information as it provides context and updates on AI developments and governance responses without reporting an AI Incident or AI Hazard.

Japan Drafts a Law to Reduce the Dangers and Risks Posed by AI

2025-01-31
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (e.g., DeepSeek, a generative AI chatbot) and discusses the development of laws to reduce AI-related risks. However, no direct or indirect harm has occurred yet, nor is there an incident described. The article centers on a governmental response to potential AI risks, which fits the definition of Complementary Information as it provides context on governance and societal responses to AI hazards rather than reporting a new incident or hazard itself.

DeepSeek AI Chatbot Blocked in Italy to Protect User Data

2025-01-31
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system whose use involves processing personal data. The Italian data protection authority blocked the app due to insufficient transparency about data collection and storage, raising concerns about privacy violations. Additionally, a data leak exposing sensitive user chat histories constitutes harm to users' rights and privacy. These factors indicate that the AI system's use and malfunction have directly or indirectly led to violations of data protection rights, qualifying this event as an AI Incident.

DeepSeek Sparks Controversy, South Korea Investigates Its Data Management

2025-01-31
Perbaiki diri, bukan sekadar pesta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's chatbot R1) whose development and use have raised concerns about personal data management and compliance with privacy laws. While no direct harm or incident has been reported, the regulatory scrutiny and investigations reflect a plausible risk of violations of data protection rights, which could lead to harm if not addressed. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of fundamental rights related to personal data protection.

Did DeepSeek Steal OpenAI's Data?

2025-01-31
Sinar Harian
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (LLMs) and allegations of unauthorized data use that could violate intellectual property rights, which is a recognized harm under the AI Incident definition. However, since the article only reports allegations and ongoing investigations without confirmed violations or realized harm, it does not yet meet the threshold for an AI Incident. The potential for harm and the ongoing scrutiny make it an AI Hazard, as the development and use of AI systems in this manner could plausibly lead to violations of rights and other harms if confirmed.

Vulnerability in Chinese Generative AI DeepSeek: Wallarm Succeeds in Jailbreaking It

2025-02-04
ITmedia ビジネスオンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (DeepSeek) where security flaws have been exploited to bypass safeguards and access confidential information. This directly relates to potential violations of data confidentiality and compliance, which are harms under the framework. While the immediate harm may be limited due to the fix, the article describes realized vulnerabilities and risks that have materialized, qualifying it as an AI Incident rather than a mere hazard or complementary information. The involvement of AI is explicit, and the harm relates to breaches of data security and compliance obligations.

Many Countries Restrict the Use of DeepSeek; Japan... -- Chinese Media (February 3, 2025)

2025-02-03
エキサイトニュース
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI model whose use is being restricted or investigated by various countries due to concerns about personal data protection and national security risks. These actions stem from the use and potential misuse of the AI system, which could lead to violations of privacy rights and possibly broader security harms. Although no direct harm is reported as having occurred yet, the regulatory responses and warnings indicate plausible risks of harm. Therefore, this situation qualifies as an AI Hazard because the AI system's use could plausibly lead to incidents involving harm to rights or security, but no confirmed incident of harm is described in the article.

Chinese AI Firm DeepSeek Faces Allegations of 'Distilling' Data from US-Based OpenAI

2025-02-04
Newsweek日本版
Why's our monitor labelling this an incident or hazard?
The article describes a case where an AI company allegedly used OpenAI's data without authorization to train its own AI models, constituting a violation of intellectual property rights, a recognized harm under the framework. The misuse of AI data directly relates to the development and use of AI systems. Additionally, the involvement of national security concerns underscores the seriousness of the harm. Since the harm is realized (data misuse has occurred) and involves AI system development and use, this is classified as an AI Incident rather than a hazard or complementary information.

Chinese AI DeepSeek Faces Widening Use Restrictions Across Countries and Regions amid Concerns over Data Leaks to the Chinese Government

2025-02-04
ITmedia ビジネスオンライン
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a generative AI) whose use is being restricted by multiple governments due to concerns about information leakage and security risks. These concerns reflect plausible future harms related to privacy violations and national security, fitting the definition of an AI Hazard. There is no indication that actual harm has occurred yet, so it is not an AI Incident. The article focuses on the potential risks and governmental responses, not on a realized incident or a complementary update to a past incident.

DeepSeek May Have Leaked Chat Contents: What You Should Know Before Using It

2025-02-04
ライフハッカー・ジャパン
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that processes user chat inputs and generates outputs, thus qualifying as an AI system. The incident involves the use and malfunction (security misconfiguration) of this AI system leading to the exposure of sensitive user data, including chat histories and secret keys. This exposure constitutes a violation of user rights and privacy, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. Therefore, this event qualifies as an AI Incident because the AI system's malfunction directly led to harm through data leakage.

DeepSeek's AI Shows a '100% Attack Success Rate': Security Research Findings from Cisco and Others

2025-02-03
WIRED.jp
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's large language model) whose use has directly led to harm by enabling the generation and potential spread of harmful content. The security researchers' tests demonstrate that the AI system's defenses are insufficient, allowing malicious actors to exploit it to produce dangerous outputs. This meets the criteria for an AI Incident because the AI system's malfunction or inadequate safeguards have directly caused harm to communities through the generation of harmful content. The article does not merely warn of potential harm but documents realized vulnerabilities and exploitation, thus it is not an AI Hazard or Complementary Information.
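The "attack success rate" metric behind such findings can be sketched in a few lines: each adversarial prompt is sent to the model, and a checker decides whether the reply violates policy. The toy model and checker below are illustrative stand-ins of our own devising, not the researchers' actual harness.

```python
def attack_success_rate(prompts, model, violates_policy):
    """Fraction of adversarial prompts that elicit a policy-violating reply."""
    hits = sum(1 for p in prompts if violates_policy(model(p)))
    return hits / len(prompts)

# Toy stand-in: a model with no effective safeguards complies with every request.
def unguarded_model(prompt):
    return "Sure, here is how to " + prompt

# Toy checker: flag any reply that complies instead of refusing.
def violates(reply):
    return reply.startswith("Sure")

rate = attack_success_rate(["make X", "bypass Y"], unguarded_model, violates)
# rate == 1.0: every prompt succeeded, the "100%" figure reported for a model
# whose defenses all fail.
```

A model that refused every request would score 0.0 on the same harness; the reported 100% figure means none of the tested safeguards held.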

Countries Around the World Grow Wary of DeepSeek: Be Careful with Your Personal Data

2025-02-03
gizmodo.jp
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI model) and focuses on concerns about its data handling practices, which relate to personal data privacy and security. The ongoing investigations and regulatory actions indicate a credible risk that the AI system's use or malfunction could lead to violations of privacy rights and data protection laws, constituting potential harm. Since no confirmed harm or breach has been reported yet, but the risk is credible and investigations are active, this situation fits the definition of an AI Hazard rather than an AI Incident. The article is not merely general AI news or a response update but centers on the plausible future harm from the AI system's data practices.

Hundreds of Companies Ban the Much-Watched Chinese AI DeepSeek, Citing Data Leak Risks

2025-02-01
GIGAZINE
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system as it provides AI-powered services and models. The event involves the use and malfunction (data breach) of this AI system leading to direct harm through data leakage and privacy violations. The blocking of DeepSeek by hundreds of companies is a response to these harms. The data breach exposing millions of chat histories confirms realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction.

The DeepSeek Shock and Its Ripples: A Reporter and an Engineer Explain in Depth

2025-02-04
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the development and release of a generative AI system, with allegations of improper data acquisition and information protection issues. These concerns relate to potential violations of intellectual property rights and data privacy, which are harms under the AI Incident definition. However, the article does not state that these harms have already occurred or that the AI system has directly or indirectly caused harm yet. Therefore, this situation represents a plausible risk of harm rather than a realized incident. Hence, it is best classified as an AI Hazard.

With DeepSeek's Arrival, Can the US Stop China's AI Progress? -- Hong Kong Media (February 5, 2025)

2025-02-04
Excite
Why's our monitor labelling this an incident or hazard?
The article centers on the strategic competition and regulatory considerations related to AI development between the US and China, particularly focusing on export controls and national security concerns. While it involves an AI system (DeepSeek) and discusses potential risks and policy responses, there is no mention of actual harm or incidents caused by the AI system. The concerns are about plausible future risks and regulatory challenges rather than realized harms. Therefore, this event fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments and governance responses without reporting a new AI Incident or AI Hazard.

What Was the 'DeepSeek Shock'? Explaining the State of AI Development in 2025

2025-02-04
ITmedia
Why's our monitor labelling this an incident or hazard?
The article focuses on the economic and technological impact of DeepSeek's AI model announcement, which caused a stock market shock due to changed expectations about AI development costs and GPU demand. There is no mention of any harm to people, property, rights, or critical infrastructure, nor any indication that the AI system's use or malfunction caused or could cause such harm. The event is a significant development in the AI ecosystem but does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides important context and understanding of AI development trends and market responses without describing harm or plausible harm.

Western Concerns over Data Handling by China's DeepSeek Generative AI; China Pushes Back: 'We Do Not Demand Illegal Data Collection' (February 1, 2025)

2025-02-01
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's generative AI) and discusses concerns about data privacy and potential unauthorized data sharing, which could plausibly lead to violations of privacy rights or other harms. Since no actual harm or incident has been reported yet, and the focus is on concerns and investigations, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the concerns themselves imply a credible risk of harm, and it is not unrelated as it directly involves an AI system and potential harm.

Many Countries Restrict the Use of DeepSeek; Japan... -- Chinese Media (February 3, 2025)

2025-02-03
BIGLOBEニュース
Why's our monitor labelling this an incident or hazard?
The article describes regulatory and governmental actions limiting or scrutinizing the use of an AI system (DeepSeek) due to concerns about personal data usage and national security. While no actual harm is reported, the described restrictions and investigations reflect a credible risk that the AI system could lead to harms such as violations of privacy rights or security threats. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident, but no incident has yet been reported.

DeepSeek Successfully Jailbroken, System Prompt Extracted

2025-02-03
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The event involves the use and manipulation of an AI system (DeepSeek's generative AI) and the extraction of its system prompt, which is a security breach or vulnerability exploitation. However, the article does not report any direct or indirect harm resulting from this extraction, such as injury, rights violations, or disruption. Therefore, it does not qualify as an AI Incident. It also does not describe a plausible future harm scenario beyond the current event, so it is not an AI Hazard. The event is primarily a report on a security research finding related to AI system vulnerabilities, which enhances understanding of AI system security but does not itself describe harm or potential harm. Hence, it is best classified as Complementary Information.

A US Senator Plans a 'DeepSeek Ban Act'; New Bill Introduced to Regulate AI Imports and Exports with China

2025-02-04
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (DeepSeek's AI app) and discusses legislative efforts to restrict AI technology transfer due to concerns about data security, intellectual property, and geopolitical competition. However, no direct or indirect harm from the AI system's development, use, or malfunction has occurred yet. The focus is on a proposed law that could prevent future harms by restricting AI technology flows. This fits the definition of an AI Hazard, as the event plausibly relates to potential future harms from AI system proliferation and misuse, but no incident has materialized. It is not Complementary Information because it is not an update or response to a past incident but a new legislative proposal. It is not Unrelated because AI systems and their risks are central to the event.

Report of a Successful Jailbreak of DeepSeek's AI Model That Extracted Its System Prompt

2025-02-04
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek V3) and its development and use, specifically a security breach (jailbreak) that extracted the system prompt. This is a clear AI-related event involving the AI system's malfunction or misuse. However, the article does not report any actual harm caused by this jailbreak, such as injury, rights violations, or disruption. The researchers responsibly notified DeepSeek, and the vulnerability was fixed, indicating mitigation. The event thus informs about AI security risks and remediation efforts, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard. There is no direct or indirect harm realized, nor is there a credible ongoing risk described that would justify classifying it as an AI Hazard.

US Investigating Whether DeepSeek Imported Embargoed High-Performance NVIDIA GPUs via Singapore

2025-02-03
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (DeepSeek's AI models) and the use of high-performance AI hardware (NVIDIA GPUs) critical for AI development. The U.S. investigation focuses on the possible illegal procurement of these GPUs circumventing export controls, which is a development and use issue related to AI systems. Although no direct harm (such as injury, rights violations, or operational disruption) has been reported, the potential for significant harm exists if restricted AI technology is used without compliance, including national security concerns and undermining regulatory frameworks. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if the hardware use enables harmful AI capabilities or breaches legal obligations. There is no indication of realized harm yet, so it is not an AI Incident. It is not merely complementary information because the investigation itself highlights a credible risk. It is not unrelated because AI systems and AI hardware are central to the event.

The 'DeepSeek Shock' Throws the West into Turmoil... Reports Say the 'AI Prodigy Born in 1995' Who Worked on Its Development Was Poached for Around 200 Million Yen

2025-02-04
デイリー新潮
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek-R1) whose release has caused market disruption (economic harm) and is associated with cyberattacks and data theft allegations (potential violations of intellectual property rights and data security). These harms are directly or indirectly linked to the AI system's development and use. The economic impact on NVIDIA's stock and the data security concerns indicate realized harms rather than just potential risks. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Microsoft Begins Offering DeepSeek R1 in the Azure AI Foundry and GitHub Model Catalogs

2025-02-04
CodeZine
Why's our monitor labelling this an incident or hazard?
The article describes the release of an AI model and the associated safety evaluations and tools to mitigate risks. There is no mention of any harm caused or potential harm that could plausibly lead to an AI Incident or Hazard. The focus is on the availability of the AI model and the safety frameworks in place, which is informative and contextual but does not describe an incident or hazard. Therefore, this is Complementary Information as it provides context and updates about AI system deployment and governance without reporting any harm or risk event.

AI Startup DeepSeek Under Fire: Italy Blocks, the US Hacks

2025-01-31
Telepolis
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek's AI assistant) whose access was blocked by a data protection authority over insufficient information about personal data usage, indicating concerns about potential violations of privacy rights (a form of human rights). However, no explicit harm to individuals or groups has been reported yet, only regulatory action and investigation. The cyberattack caused temporary service disruption but no reported injury, rights violation, or other harm as defined. The event mainly reports on regulatory enforcement and a cyberattack response, which fits the definition of Complementary Information, as it updates on societal and governance responses to AI-related issues. No direct or indirect harm caused by the AI system's development, use, or malfunction is reported, nor any clear plausible future harm beyond regulatory concerns. Therefore, the event is not an AI Incident or AI Hazard but Complementary Information.

Italy Demands Information on Data Protection from Deepseek

2025-01-29
Nau
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Deepseek's AI chatbot) and concerns its use and data handling practices. However, the article describes an ongoing regulatory investigation and information request, not a realized harm or incident. There is no direct or indirect harm reported yet, only a potential risk related to data privacy and censorship. Therefore, this qualifies as Complementary Information, as it provides context on governance and oversight responses to AI use rather than reporting an AI Incident or Hazard.

Italy's Data Protection Authority Blocks DeepSeek

2025-01-30
nachrichten.at
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use has led to concerns about data privacy and potential violations of data protection regulations, prompting regulatory action. Although no direct physical harm or injury is reported, the blocking of the app and investigation relate to violations of legal obligations protecting user data and rights, which fits the definition of an AI Incident under violations of applicable law intended to protect fundamental rights. The censorship of information by the AI also suggests harm to the right to access information, a human rights concern. Therefore, this event qualifies as an AI Incident.

Italy Demands Information on Data Protection from DeepSeek

2025-01-29
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI) and its data practices, which are under scrutiny by a regulatory authority. However, there is no indication that the AI system has directly or indirectly caused harm or violated rights yet. The event is about a regulatory investigation and information request, which is a governance response to potential risks but does not itself constitute an AI Incident or AI Hazard. Therefore, it fits the category of Complementary Information, as it provides context on societal and governance responses to AI use and data privacy concerns.

DeepSeek's AI application no longer available in Italy for now

2025-01-29
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose data practices and censorship raise concerns about potential violations of privacy rights and freedom of information, which are human rights. The Italian authority's demand for information and temporary unavailability of the app in Italy indicate regulatory scrutiny due to plausible risks. Since no actual harm or incident has been reported yet, but there is a credible potential for harm, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

AI: the Garante della Privacy blocks DeepSeek

2025-01-31
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised privacy concerns leading to regulatory intervention. Although no direct harm such as injury or rights violation is explicitly reported, the blocking action indicates a potential or ongoing violation of data protection rights, which falls under human rights and legal obligations. Since the article describes an active regulatory measure to prevent harm related to data privacy, this constitutes an AI Incident due to the direct impact on users' rights and data protection.

AI: DeepSeek is not having an easy time. The Garante's reasons

2025-01-31
The Watcher Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose data processing practices are under scrutiny for compliance with GDPR and data protection laws. The regulatory authority's urgent intervention to limit data processing and the app's removal from stores indicate concerns about potential harm, including privacy violations and misinformation risks. No direct harm is reported as having occurred, but the credible risk and regulatory response show plausible future harm. Hence, this is an AI Hazard rather than an AI Incident. The geopolitical context and prior similar cases (e.g., ChatGPT) support the assessment of potential harm rather than realized harm.

The Garante per la Privacy blocks the Chinese AI DeepSeek

2025-01-30
la Repubblica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose data processing practices have raised regulatory concerns leading to an urgent restriction on its operation in Italy. The involvement of the AI system's use in handling personal data without adequate compliance with privacy laws constitutes a potential violation of legal obligations protecting fundamental rights. Although no direct harm is explicitly reported, the regulatory intervention indicates a significant risk of harm to users' privacy rights. This situation fits the definition of an AI Incident because the AI system's use has led to a breach or potential breach of applicable law intended to protect fundamental rights (data privacy).

DeepSeek: Garante Privacy orders a block and opens an investigation

2025-01-30
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek chatbot service) and concerns the use of personal data of Italian users, with regulatory action taken to limit data processing and an investigation initiated. Although no direct harm is explicitly reported, the regulatory intervention indicates potential or ongoing violations of data protection rights, which fall under violations of applicable law protecting fundamental rights. This constitutes an AI Incident because the AI system's use has led to regulatory action due to data protection concerns, implying harm or risk to users' rights has materialized or is ongoing.

Italy blocks DeepSeek over lack of information; the country asked it to explain what kind of information it uses for training

2025-01-30
El Universal
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, and the event concerns its use and data practices. Although no direct harm is reported, the blocking action is a preventive measure due to concerns about potential violations of data protection laws and user privacy, which relate to human rights and legal obligations. Since no actual harm has been reported but there is a credible risk of harm due to insufficient transparency and possible non-compliance with data protection regulations, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the blocking is a concrete regulatory action based on potential risks, and it is not unrelated as it directly involves an AI system and its governance.

The Garante privacy blocks DeepSeek with immediate effect to protect Italians

2025-01-30
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use involves processing personal data of Italian users. The regulatory authority's intervention is due to concerns about compliance with data protection laws, which relate to violations of fundamental rights (privacy). Although no direct harm is reported yet, the situation plausibly could lead to violations of rights if the data processing continues unchecked. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to a breach of legal obligations protecting fundamental rights, but no realized harm is described yet.

Garante Privacy investigates DeepSeek: information requested on the processing of personal data

2025-01-28
Quotidiano Nazionale
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and concerns about its data processing practices, which could potentially lead to violations of data protection laws and harm to individuals' privacy rights. However, no actual harm or violation has been reported yet; the event is about a regulatory inquiry and information request to assess compliance and risks. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to harm if data protection is inadequate, but no incident has occurred yet.

The Garante per la Privacy blocks DeepSeek in Italy: "It may not process citizens' data"

2025-01-30
la Repubblica
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) processing personal data of Italian users. The Garante's intervention to block data processing due to non-compliance with data protection regulations indicates a violation of legal obligations protecting fundamental rights. This constitutes an AI Incident because the AI system's use has directly led to a breach of applicable law (data protection law), harming users' rights. Therefore, the event qualifies as an AI Incident.

DeepSeek suspended, but the app keeps working

2025-01-31
Famiglia Cristiana
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) that processes user data and conversations. The Italian privacy authority's urgent limitation and investigation stem from concerns about insufficient data protection measures, implying a credible risk of violation of fundamental rights (data privacy). No direct harm is reported yet, but the regulatory action indicates plausible future harm. The app's continued operation for existing users despite removal from stores and claims of a hacker attack further underline potential risks. Hence, this is best classified as an AI Hazard rather than an Incident, Complementary Information, or Unrelated event.

Garante privacy asks DeepSeek for information: "possible risk to the data of millions of people in Italy"

2025-01-28
Stato Quotidiano
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the DeepSeek AI chatbot) and concerns about its data handling and training practices. However, it does not report any realized harm or incident but rather a regulatory inquiry due to potential risks. This fits the definition of Complementary Information, as it provides context and updates on governance and oversight related to AI systems and their societal impact, without describing a specific AI Incident or AI Hazard.

DeepSeek holds out: the Chinese AI ignores the Garante's orders and is still online

2025-01-31
Stile e Trend Fanpage
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system whose use involves processing personal data of Italian users. The Garante della Privacy's intervention indicates that the AI system's use has led to violations of applicable privacy laws, which are legal obligations protecting fundamental rights. The continued operation of the web version despite the order implies ongoing harm or breach. Therefore, this event qualifies as an AI Incident due to the AI system's use directly leading to a breach of legal obligations (privacy rights).

Italy blocks the DeepSeek app and the US Congress bans it for its staff

2025-01-31
ElDiario.es
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot application, thus an AI system. The Italian data protection authority blocked its use due to insufficient information about personal data processing, indicating concerns about potential violations of privacy rights (a form of harm under human rights and legal obligations). The US Congress also restricted its use due to risks of malware distribution linked to DeepSeek, indicating plausible cybersecurity harm. No direct harm is reported yet, but the regulatory actions and investigations show credible risks of harm. Hence, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

ALTROCONSUMO * DEEPSEEK: "The Garante blocks the Chinese AI chatbot and opens an investigation: maximum protection for users' data across Europe"

2025-01-31
Agenzia giornalistica Opinione. Notizie nazionali e dal Trentino Alto Adige
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the DeepSeek chatbot) whose use has led to violations of the GDPR, a legal framework protecting fundamental rights related to personal data. The blocking and investigation by the data protection authority confirm that harm in the form of rights violations has occurred or is ongoing. The AI system's processing of personal data without adequate safeguards, including transfer to China without guarantees and lack of transparency, constitutes a breach of obligations under applicable law. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of legal obligations protecting fundamental rights.

Multiple countries restrict the use of DeepSeek

2025-01-31
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article details regulatory scrutiny and government warnings about DeepSeek's AI system, focusing on data privacy and national security concerns. These actions reflect potential risks but do not report any realized harm or incidents caused by the AI system. Therefore, this situation fits the definition of Complementary Information, as it provides updates on governance responses and investigations related to AI without describing a specific AI Incident or AI Hazard.

[Celestial Dynasty Ukiyo-e] DeepSeek: the plot twists too fast, and the truth is a slap in the face for the CCP

2025-01-31
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek's chatbot and OpenAI's models) and their development and use, including possible unauthorized data extraction for training. However, no actual harm (such as injury, rights violations, or disruption) has been reported or can be reasonably inferred. The article mainly provides information about the investigation, the AI system's performance, and political context, which fits the definition of Complementary Information. There is no indication that the AI system has caused or is causing harm, nor that it plausibly could lead to harm imminently. Hence, it is not an AI Incident or AI Hazard.

DeepSeek: Italy decided to block the Chinese artificial intelligence application

2025-01-31
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek) whose use is currently restricted by a regulatory authority due to concerns about data privacy and insufficient transparency. No actual harm has been reported yet, but the potential for violations of personal data rights and legal obligations is credible and plausible. The event is about the potential risk and regulatory response rather than a realized harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek: Italy decided to block the Chinese artificial intelligence application

2025-01-31
esdelatino.com
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved. The Italian data protection authority's action stems from concerns about the AI system's use and data handling practices, which relate to legal compliance and potential violations of data protection rights. Although no direct harm is reported, the blocking and investigation indicate a regulatory response to potential violations of fundamental rights (data privacy). Since the event concerns the use of an AI system and possible breaches of legal obligations protecting personal data, it qualifies as an AI Incident due to violations of applicable law intended to protect fundamental rights. The harm is indirect but materialized in the form of regulatory intervention and user access limitation.

The Garante per la Privacy blocks DeepSeek in Italy

2025-01-31
Italian Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) that processes personal data, raising concerns about GDPR compliance and data privacy rights (a human rights violation). The Garante's blocking order and ongoing investigation indicate regulatory action to address potential legal violations. However, there is no report of actual harm or breach having occurred yet, only the potential for such harm if data processing continues without compliance. The event is primarily about the regulatory response and investigation, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Garante Privacy blocks DeepSeek, but the AI works anyway

2025-01-31
tecnologia.libero.it
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system as it is described as an AI model providing responses and personalized outputs based on user data. The Garante's intervention is due to violations of GDPR, which is a breach of legal obligations protecting fundamental rights, specifically privacy rights. The blocking of data processing is a direct response to these violations, indicating realized harm to users' data protection rights. The censorship and biased responses also imply harm to informational rights and freedom of expression. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harms related to privacy and information access.

Italy blocks the Chinese application DeepSeek over lack of information

2025-01-31
naiz:
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use has raised concerns about data privacy and legal compliance. Italy's data protection authority blocked the app due to insufficient transparency about data collection, usage, and storage, which could plausibly lead to violations of user privacy rights (a form of harm under human rights and legal obligations). No actual harm or incident is reported, only preventive regulatory action and investigation. Hence, this is an AI Hazard, reflecting a credible risk of harm from the AI system's use if unaddressed.

DeepSeek is no longer available in Italy | TRT Italiano

2025-01-31
trt.net.tr
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use involves processing personal data. The Italian Data Protection Authority has blocked its access and launched an investigation due to concerns about inadequate information on data collection and processing, implying potential violations of data protection rights. No actual harm is reported yet, but the regulatory action indicates a credible risk of harm to users' privacy and rights if the system were to continue operating without compliance. Hence, this is an AI Hazard because the AI system's use could plausibly lead to violations of fundamental rights (data privacy) and related harms, but no direct harm has been confirmed at this stage.

Artificial intelligence: the privacy Garante blocks DeepSeek

2025-01-31
Stato Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have been limited by a privacy authority due to non-compliance with legal frameworks protecting user data. While no direct harm is reported, the regulatory action and investigation indicate concerns about potential violations of rights (privacy rights) linked to the AI system's use. This fits the definition of Complementary Information, as it reports governance and societal responses to AI use rather than a realized harm or a plausible future harm incident.

Japan, France, Italy and others take a stance! Plus Cook, Zuckerberg...

2025-01-31
news.ycwb.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI system and details various countries' regulatory and governmental responses, including investigations and usage restrictions, reflecting concerns about privacy and data protection risks. However, no actual harm or incident caused by DeepSeek is reported. The involvement of AI is clear, but the focus is on responses and assessments rather than harm or plausible harm. This fits the definition of Complementary Information, which includes updates on governance, regulatory inquiries, and corporate reactions to AI developments without new incidents or hazards.

The Garante della Privacy bans DeepSeek: here's why

2025-02-01
lentepubblica.it
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model chatbot) whose development and use have directly led to harms including violations of data privacy and security, as evidenced by the Italian authority's ban and ongoing investigations. The system collects and processes extensive personal data without adequate safeguards, shares data with Chinese entities under government control, and has been found to generate harmful and discriminatory content, posing risks to users and communities. These factors meet the criteria for an AI Incident due to realized harm to privacy rights and potential broader harms.

Italy's data regulator blocks Chinese firm DeepSeek's AI application

2025-01-31
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's chatbot) whose use has raised concerns about violations of data privacy rights, a fundamental human right. The regulator's intervention follows inadequate compliance with privacy requirements, indicating a breach or potential breach of legal obligations protecting personal data. Since the AI system's use has directly led to regulatory action due to privacy concerns, this constitutes an AI Incident under the framework, specifically a violation of human rights and legal obligations related to data protection.

Italy blocks DeepSeek: regulator says it received no information about the data the Chinese AI collects

2025-01-31
BioBioChile - La Red de Prensa Más Grande de Chile
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use and data handling practices have led to regulatory action by the Italian data protection authority. The authority's urgent blocking and investigation are responses to insufficient transparency about personal data collection, usage, and legal basis, which implicates potential violations of data protection rights. This fits the definition of an AI Incident because the AI system's use has directly led to a breach or potential breach of applicable law protecting fundamental rights (data privacy). The harm is regulatory and legal in nature, aiming to prevent further violations and protect user data. The event is not merely a potential risk (hazard) or a complementary update but a concrete incident involving harm and official intervention.

Taiwan tells public agencies not to use DeepSeek; security rationale sparks controversy

2025-02-01
news.china.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI model) and concerns about its use in sensitive government contexts. The advisory issued by Taiwan's Digital Development Department is based on potential risks of data leakage and cross-border transmission, which could plausibly lead to harm such as violations of privacy or information security breaches. However, no actual incident of harm has been reported. The public debate and criticism reflect differing views on the plausibility of these risks. Therefore, this event fits the definition of an AI Hazard, as it concerns a plausible future risk stemming from the use of an AI system, but no realized harm has been documented.

DeepSeek disappears from Apple and Google stores in Italy; website slows to a crawl

2025-01-29
la Repubblica
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system providing a bot service that uses AI technology similar to ChatGPT. The privacy authority's investigation into unauthorized data use and data protection compliance directly relates to potential violations of fundamental rights (privacy and data protection). Although no explicit harm is reported as having occurred yet, the investigation and regulatory scrutiny indicate a plausible risk of harm to individuals' rights. However, since the article focuses on the ongoing investigation and regulatory actions rather than a realized harm, this event is best classified as Complementary Information, providing context and updates on AI governance and responses to potential AI-related harms.

DeepSeek app pulled in Italy; privacy questions remain unresolved

2025-01-29
news.china.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application that processes personal data, and the privacy regulator's investigation indicates potential legal and rights-related concerns. However, the article does not report any realized harm or incident, only the regulatory inquiry and app removal as a precaution. Therefore, this is best classified as an AI Hazard, reflecting the plausible risk of privacy-related harm stemming from the AI system's use and data handling practices.

Is DeepSeek safe? The Garante Privacy investigates

2025-01-30
tecnologia.libero.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to concerns about violations of data protection laws and fundamental rights, including privacy and transparency. The Italian Data Protection Authority's investigation and consumer associations' formal complaints indicate that harms to rights have occurred or are occurring. The AI system's data collection, storage, and processing practices are central to these harms. Hence, this is an AI Incident as per the definition involving violations of human rights and legal obligations due to the AI system's use.

The Garante della Privacy requests information from DeepSeek: are Italians' data at risk?

2025-01-28
TGLA7
Why's our monitor labelling this an incident or hazard?
An AI system (the DeepSeek chatbot) is involved, as it is an AI service collecting and processing personal data. The event stems from the use and development of this AI system, specifically regarding data handling practices. Although no direct harm has occurred, the authority's concern about potential high risk to personal data indicates a plausible future harm scenario. Therefore, this qualifies as an AI Hazard, since the event concerns a credible risk of violation of data protection rights due to the AI system's operation, but no incident (actual harm) has been reported yet.

DeepSeek: the Garante requests information from the Chinese companies: "Data of millions of people in Italy at risk"

2025-01-28
La Stampa
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and concerns about data privacy violations, which fall under violations of human rights and legal obligations. However, the article does not state that harm has already occurred or that the AI system has directly or indirectly caused harm. Instead, it reports on a regulatory authority's request for information and investigation prompted by consumer complaints, indicating a governance response to potential risks. This fits the definition of Complementary Information, as it enhances understanding of AI ecosystem governance and responses without describing a new AI Incident or AI Hazard.

DeepSeek: Garante privacy requests information over 'data at risk'

2025-01-28
tg24.sky.it
Why's our monitor labelling this an incident or hazard?
The event concerns the use of an AI system (the DeepSeek chatbot) and the potential risk to personal data privacy. However, there is no indication that harm has yet occurred or that the AI system has malfunctioned or been misused to cause harm. The request for information is a precautionary regulatory action to assess potential risks. Therefore, this is best classified as Complementary Information, as it provides context and updates on governance and oversight related to AI systems and their data practices, without reporting a realized incident or imminent hazard.

DeepSeek: Garante Privacy requests information; data of millions of Italians at risk

2025-01-28
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the DeepSeek AI chatbot) and concerns about its data processing practices. The authority's inquiry is prompted by potential risks to personal data privacy, which could lead to violations of data protection laws and rights. However, no realized harm or incident is described; the event is about assessing potential risks and gathering information. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm (violation of data protection rights) if not properly managed.

DeepSeek: the privacy Garante steps in: "Data of millions of Italians at risk"

2025-01-28
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. The investigation and formal complaints arise from potential violations of data protection laws (GDPR), which protect fundamental rights. Although no confirmed harm is reported yet, the authority's concern about 'high risk' to millions of people's data and the formal complaints indicate a plausible risk of harm to rights. Since the harm is not confirmed but plausibly could occur due to the AI system's data handling, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the potential risk and regulatory action, not on a resolved or past incident.

The privacy Garante turns a spotlight on DeepSeek and opens an inquiry

2025-01-28
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Deepseek chatbot) and concerns its development and use, specifically regarding data collection and privacy compliance. Although no direct harm has been reported, the investigation is prompted by concerns that the AI system's operation could plausibly lead to violations of data protection laws and harm to individuals' privacy rights. This fits the definition of an AI Hazard, as the event concerns a credible risk of harm due to the AI system's potential non-compliance and data handling practices, but no realized harm is described yet.

In record time, DeepSeek already has privacy problems: "Risk to the data of millions of people"

2025-01-28
Stile e Trend Fanpage
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about its data privacy practices, which could plausibly lead to violations of data protection rights (a form of human rights violation). However, no actual harm or incident has been reported yet; the authority is seeking information to assess potential risks. Therefore, this situation constitutes an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving privacy violations, but no direct or indirect harm has yet occurred.

The Garante privacy requests information from DeepSeek: data of millions of Italians at risk

2025-01-29
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) that processes personal data of millions of Italians. The Italian privacy authority's inquiry is motivated by concerns over potential risks to data privacy and legal compliance. However, no actual data breach, misuse, or harm has been reported yet. The event is a regulatory investigation into possible risks, which could plausibly lead to an AI Incident if violations or harms are found. Therefore, this qualifies as an AI Hazard, as it concerns a plausible future risk of harm related to the AI system's data processing practices, but no realized harm is described.

Italy asks DeepSeek for clarification on its use of user data

2025-01-29
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI model) and concerns about its data handling practices, which could potentially lead to violations of privacy rights and legal obligations under GDPR. However, the event is about a regulatory investigation and information request, with no direct or indirect harm reported as having occurred. Therefore, it is a governance and societal response to potential AI-related issues, enhancing understanding and oversight rather than describing an incident or hazard. This fits the definition of Complementary Information.

The Garante della Privacy knocks on DeepSeek's door: the requests to the Chinese AI startup. Is a block coming in Italy?

2025-01-29
StartupItalia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's chatbot) and concerns its development and use, specifically regarding data privacy and legal compliance. However, the event currently describes an ongoing investigation and information request by a regulatory authority, with no confirmed harm or violation yet. The potential for harm exists if non-compliance is found, but at this stage, it is a plausible risk rather than a realized incident. Therefore, this qualifies as Complementary Information, as it provides important context and updates on governance and regulatory responses related to AI, without reporting a confirmed AI Incident or AI Hazard.
With Chinese AI DeepSeek, "millions of data records at risk in Italy"

2025-01-29
telefonino.net
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is involved, specifically an AI service handling large-scale personal data. The event stems from the use and development of this AI system, with regulatory authorities investigating potential violations of data protection laws and risks to user privacy. Although no direct harm has been reported, the situation plausibly could lead to an AI Incident if data misuse or breaches occur. Therefore, this qualifies as an AI Hazard because it highlights a credible risk of harm related to the AI system's data practices, but no realized harm is described yet.
The privacy Garante blocks DeepSeek in Italy

2025-01-30
Today
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. The Italian authority's decision to block the service is a response to potential violations of data protection laws, which constitute a breach of fundamental rights (privacy). Although no specific harm is reported as having occurred, the regulatory action is a direct response to risks of harm to users' privacy and rights. Since the harm is not yet realized but the risk is credible and immediate, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the regulatory intervention and potential risks rather than reporting an actual data breach or harm.
Italian authority demands data protection information from DeepSeek; GDPR compliance questioned

2025-01-29
news.china.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI company whose large language model processes personal data. The regulatory complaint and information request relate to whether DeepSeek's data handling complies with GDPR, implicating potential violations of data protection and privacy rights (a breach of obligations under applicable law protecting fundamental rights). Although no direct harm is reported yet, the concerns about data protection, especially for minors, and the risk to millions of individuals' data indicate a plausible risk of harm. However, since the article focuses on the regulatory inquiry and potential compliance issues without reporting actual realized harm, this event is best classified as Complementary Information, providing important context on governance and oversight responses to AI systems.
AI, Garante Privacy investigates DeepSeek: Italians' data in China?

2025-01-29
Key4biz
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data handling practices. However, no actual harm or violation has been reported yet; the authority is conducting a preventive investigation to understand potential risks. This fits the definition of Complementary Information, as it provides context on governance and oversight responses to AI-related privacy concerns without describing a realized AI Incident or a plausible AI Hazard at this stage.
DeepSeek like ChatGPT: the Italian Privacy Garante blocks the Chinese AI - StartupItalia

2025-01-30
StartupItalia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to regulatory intervention due to data privacy concerns, which relate to violations of legal obligations protecting fundamental rights (privacy rights). Although no direct physical harm is reported, the misuse or inadequate protection of personal data constitutes a violation of rights under applicable law, qualifying as an AI Incident. The regulatory action and app removal indicate realized harm or risk to users' data privacy, not just a potential hazard or complementary information.
Privacy Garante blocks DeepSeek in Italy: "User data at risk; the Chinese companies' clarifications are insufficient"

2025-01-30
ilgiornaleditalia.it
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a generative AI chatbot) whose use and data processing practices have been scrutinized by the Italian privacy authority. The authority's intervention followed insufficient responses from the company about data protection, and a security flaw was discovered exposing sensitive user data. This constitutes a violation of fundamental rights (privacy) and a breach of legal obligations, fulfilling the criteria for an AI Incident. The harm is realized as user data was exposed, and the authority's urgent measures indicate direct involvement of the AI system's use and malfunction in causing harm.
DeepSeek blocked by the Privacy Garante: "The company declared it does not operate in Italy"

2025-01-30
Stile e Trend Fanpage
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system processing personal data of Italian users. The Italian Data Protection Authority has blocked its operation due to insufficient compliance with data protection regulations, indicating a credible risk of harm to users' privacy rights. No explicit harm has been reported yet, but the regulatory action reflects plausible future harm from the AI system's use. Hence, this is an AI Hazard rather than an Incident or Complementary Information. The event is not unrelated because it directly concerns an AI system and its potential to cause harm through data misuse.
Italy confirms the blocking of DeepSeek over lack of information and opens an investigation

2025-01-30
Diario de Sevilla
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. The blocking and investigation are responses to insufficient transparency about data usage, which could plausibly lead to violations of privacy rights (a form of harm under human rights and legal obligations). Since no actual harm has been reported but there is a credible risk of harm to users' data privacy, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on regulatory actions and investigations to prevent potential harm, not on realized harm or incidents caused by the AI system.
The Privacy Garante blocks DeepSeek: the Chinese app under investigation

2025-01-30
LiberoReporter
Why's our monitor labelling this an incident or hazard?
The AI system (DeepSeek) is clearly involved as it processes human conversations using AI. The blocking by the privacy authority is a response to insufficient data protection measures and non-compliance with legal frameworks, which could plausibly lead to violations of fundamental rights (privacy). Since no actual harm is reported but a credible risk exists, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete regulatory action based on potential harm, nor is it unrelated as it directly concerns an AI system and its legal compliance.
Italy blocks the Chinese app 'DeepSeek' over lack of information | El Diario Vasco

2025-01-30
El Diario Vasco
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('DeepSeek') whose development and use are under scrutiny for a lack of transparency about data usage, implicating potential violations of data protection laws and user rights. No actual harm has been reported, but the blocking and investigation point to a credible risk that rights would be violated if the AI system continued operating without proper data governance, so this qualifies as an AI Hazard rather than an AI Incident.
Italy blocks the Chinese app 'DeepSeek' over lack of information | Ideal

2025-01-30
Ideal
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the 'DeepSeek' chatbot) whose use of personal data is under scrutiny by Italian authorities. The blocking is a regulatory response due to lack of transparency, aiming to prevent possible violations of data protection laws. Since no realized harm or incident is described, but there is a plausible risk of privacy harm, this qualifies as an AI Hazard. The involvement is about use and compliance, with potential for harm if unaddressed. The article also mentions similar scrutiny in France, reinforcing the regulatory concern but no incident has occurred yet.
Garante Privacy restricts DeepSeek: processing of Italian users' data blocked

2025-01-30
Webnews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use (processing of personal data) has led to a violation of fundamental rights under applicable law (GDPR), specifically privacy rights of individuals. This constitutes a breach of obligations intended to protect fundamental rights, which fits the definition of an AI Incident. The limitation and investigation are direct responses to realized harm (privacy violations) caused by the AI system's data processing practices. Therefore, this is classified as an AI Incident.
Garante Privacy blocks DeepSeek in Italy

2025-01-30
Punto Informatico
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system whose use involves processing personal data. The Italian privacy authority's intervention due to insufficient compliance with data protection laws and the blocking of the service indicate that the AI system's use has led to violations of fundamental rights (privacy and data protection). The involvement of multiple European authorities and the blocking of access by organizations further confirm realized harm. The discovery of a vulnerability exposing conversation histories also indicates direct harm to users' data privacy. Hence, this is an AI Incident involving violations of human rights and legal obligations related to data protection.
Artificial intelligence: why the privacy Garante is blocking DeepSeek in Italy

2025-01-30
avvenire.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose development and use are under scrutiny for potential violations of data protection laws, specifically the GDPR. Although no direct harm such as injury or rights violations has been confirmed yet, the blocking and investigation indicate concerns about possible breaches of fundamental rights related to personal data privacy. Since the AI system's use could plausibly lead to violations of privacy rights if unregulated, and the authority has taken preventive action, this constitutes Complementary Information about governance and regulatory response rather than an AI Incident or Hazard. The article focuses on the regulatory measures and investigation rather than a realized harm or imminent risk of harm.
Multiple countries set limits on the use of DeepSeek - NetEase Mobile

2025-01-30
m.163.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI large model) and discusses regulatory actions and national security investigations in multiple countries, reflecting concerns about data privacy and security risks. These concerns and restrictions imply plausible risks of harm (e.g., privacy violations, national security threats) that could arise from the AI system's use or deployment. However, the article does not report any actual harm or incident caused by the AI system. Therefore, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no harm has yet occurred or been reported.
Garante Privacy blocks DeepSeek to protect Italian data - News - Ansa.it

2025-01-30
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have raised privacy concerns leading to regulatory intervention. Although no direct harm is reported, the regulatory action indicates potential or ongoing risks to personal data protection, which is a fundamental right. Since the limitation is a preventive measure to protect data privacy and no actual harm is described, this constitutes an AI Hazard rather than an AI Incident. The AI system's use (data processing) could plausibly lead to violations of data protection rights if unregulated, justifying the urgent limitation.
Privacy Garante blocks DeepSeek to protect Italian data - Italy - Ansa.it

2025-01-30
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek chatbot services) and concerns the use of personal data, with regulatory intervention to prevent potential violations of data protection rights. Although no direct harm is reported, the restriction aims to prevent possible violations of fundamental rights related to data privacy. This constitutes an AI Hazard because the development and use of the AI system could plausibly lead to a breach of legal obligations protecting fundamental rights if not properly controlled.
GARANTE PRIVACY * ARTIFICIAL INTELLIGENCE: "DEEPSEEK BLOCKED" - Agenzia giornalistica Opinione. National news and news from Trentino Alto Adige

2025-01-30
Agenzia giornalistica Opinione. Notizie nazionali e dal Trentino Alto Adige
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use (data processing of user conversations) has led to regulatory intervention due to privacy and data protection concerns. Although no direct harm such as injury or property damage is reported, the event concerns a violation of legal obligations protecting fundamental rights (data privacy rights). The blocking and investigation indicate that the AI system's use has already caused or is causing a breach of applicable law, qualifying this as an AI Incident under the definition of violations of human rights or breach of legal obligations. Therefore, this is an AI Incident rather than a hazard or complementary information.
DeepSeek: Garante Privacy orders a block and opens an inquiry

2025-01-30
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the chatbot DeepSeek) and concerns the processing of users' personal data, which relates to violations of data protection rights under applicable law. The limitation order and investigation indicate potential or ongoing harm to privacy rights. Since the event describes an official regulatory intervention prompted by insufficient compliance and potential data protection violations, it qualifies as an AI Incident under category (c), violations of rights.
Garante Privacy: immediate limitation of data processing for DeepSeek in Italy

2025-01-30
Quotidiano Nazionale
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have been limited by a regulatory authority due to inadequate communication about data handling. This relates to the use of an AI system and concerns potential violations of data protection rights, which fall under violations of human rights or legal obligations protecting fundamental rights. However, the article does not indicate that actual harm has occurred yet, only that a precautionary limitation has been imposed. Therefore, this is a case of a plausible risk of harm leading to regulatory intervention, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
DeepSeek blocked in Italy by the privacy Garante: "To protect citizens"

2025-01-30
Affaritaliani.it
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. The Garante's urgent limitation order and investigation indicate that the AI system's operation has led to potential violations of data protection laws, which are part of legal obligations protecting fundamental rights. Although no direct harm to individuals is explicitly reported, the event involves a regulatory response to prevent or address violations of rights related to personal data. This fits the definition of an AI Incident because it involves a breach of obligations under applicable law intended to protect fundamental rights, specifically data privacy rights, caused by the AI system's use. Therefore, the event is classified as an AI Incident.
DeepSeek: the Garante urgently blocks the Chinese app in Italy. Here's why

2025-01-30
TGLA7
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI conversational system processing user data, and the Italian privacy authority has blocked its data processing due to non-compliance with EU law, indicating a breach of legal obligations protecting fundamental rights. The involvement of the AI system's use (data processing) has directly led to regulatory action to prevent harm to users' privacy rights. Although no explicit harm is detailed, the regulatory intervention implies that the AI system's operation has caused or risked causing violations of rights, qualifying this as an AI Incident under the framework's category (c) violations of human rights or breach of legal obligations protecting fundamental rights.
Italy blocks the Chinese artificial intelligence app 'DeepSeek' after not receiving the requested information

2025-01-30
Cadena SER
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('DeepSeek') with autonomous learning capabilities. Italy's data protection authority blocked the app due to insufficient transparency about training data, reflecting concerns about potential violations of data protection laws and security risks. The US Navy and White House investigations further highlight plausible security concerns. However, no direct harm or incident has been reported yet. Thus, the event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no harm has materialized yet.
Italy blocks Chinese app 'DeepSeek' over lack of information

2025-01-30
es.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use and data practices are under scrutiny by Italian authorities. The authorities have blocked the app due to insufficient information about data collection, usage, and legal compliance, which could plausibly lead to violations of data protection rights (a form of harm to individuals). However, the article does not report any actual harm or incident caused by the AI system so far. The blocking is a preventive measure addressing potential risks. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
DeepSeek blocked by the privacy Garante: "To protect Italian data"

2025-01-30
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data processing. The blocking by the privacy authority and the opening of an investigation indicate regulatory action due to potential or actual violations of data protection laws, which relate to violations of rights under applicable law. Since the event involves limitation of processing and investigation but does not explicitly state realized harm, it is best classified as an AI Hazard, reflecting plausible risk of harm to data privacy and rights. There is no indication of actual harm yet, only regulatory intervention to prevent or address potential harm.
DeepSeek mystery in Italy: app removed from online stores - News - Ansa.it

2025-01-29
ANSA.it
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI chatbot app) is involved, and its use is under scrutiny by the Italian data protection authority due to potential violations of privacy and fundamental rights. The app's removal from stores and the investigation suggest a plausible risk of harm related to data privacy and rights violations, but no actual harm or incident has been confirmed or reported. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if violations are confirmed, but no realized harm is described yet.
The DeepSeek app is no longer available in Apple's and Google's stores in Italy

2025-01-29
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek generative AI app) and a regulatory authority's request for information, but does not describe any direct or indirect harm caused by the AI system, nor does it indicate plausible future harm. The removal from app stores and the privacy inquiry are responses to concerns but do not themselves constitute an AI Incident or AI Hazard. Therefore, this is best classified as Complementary Information, as it provides context on governance and regulatory response related to an AI system.
DeepSeek: the Garante requests clarification on data processing

2025-01-31
MRW.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot, a large language model) and concerns its development and use, specifically data handling practices. However, there is no indication that any harm has occurred yet, only a potential risk to personal data privacy. The request for information is a governance and regulatory response to potential risks, not a report of an AI Incident or an imminent hazard. Therefore, this is best classified as Complementary Information, as it provides context and updates on oversight and governance related to AI data privacy issues.
Garante requests information from DeepSeek, data at risk - Breaking news - Ansa.it

2025-01-28
ANSA.it
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the DeepSeek chatbot) and concerns about its data handling practices, which could plausibly lead to violations of privacy rights or data protection laws if mishandled. However, no realized harm or incident is described; the authority is seeking information to assess potential risks. Therefore, this is an AI Hazard, as the event concerns plausible future harm related to the AI system's use and data processing, but no direct or indirect harm has yet occurred.
DeepSeek, from privacy to hacker attacks: the unknowns about the startup's future

2025-01-30
Il Messaggero
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek chatbot) that is operational but has been subject to large-scale malicious cyberattacks causing service disruption (harm to operation) and raising privacy concerns under GDPR (legal rights violations). The involvement of the AI system is explicit, and the harms include disruption of service and potential violations of privacy rights, which fall under the defined harms for AI Incidents. The ongoing investigation and the app's removal from stores in Italy further indicate realized harm rather than just potential risk. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
DeepSeek: the app is no longer available on Apple's and Google's stores in Italy

2025-01-29
Il Messaggero
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use raises concerns about data privacy and protection. However, there is no indication that any harm has yet occurred or that the AI system has malfunctioned or been misused to cause harm. The data protection authority's inquiry and app removal are precautionary measures addressing potential risks. Therefore, this event represents a plausible risk scenario (AI Hazard) rather than an actual incident or harm. It is not merely complementary information because the app's removal and regulatory inquiry indicate a credible potential for harm.
DeepSeek gone from the Italian App Stores. OpenAI alleges: "Intellectual property theft"

2025-01-29
TGLA7
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek) whose development and use are implicated in violations of intellectual property rights (OpenAI's models allegedly used without permission) and potential breaches of data privacy regulations (investigated by the Italian Data Protection Authority). These constitute harms under the AI Incident definition, specifically violations of intellectual property rights and legal obligations protecting personal data. The AI system's role is pivotal as the alleged misuse and data handling relate directly to the AI's training and operation. Hence, this is an AI Incident rather than a hazard or complementary information.
DeepSeek disappears from the Google and Apple app stores in Italy

2025-01-30
Adnkronos
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI app) whose use has led to violations of data protection laws, which are legal frameworks protecting fundamental rights. The formal complaint and subsequent removal from app stores indicate that the AI system's use has directly or indirectly caused a breach of obligations under applicable law intended to protect fundamental rights. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of legal obligations.
DeepSeek, Scorza (Privacy): "It is still available on the web; the answers are unclear"

2025-01-31
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) whose use in Italy has led to regulatory action by the privacy authority due to concerns about data protection and user privacy. The blocking of the app and limitation of data processing indicate that the AI system's use has raised issues related to legal compliance and potentially violations of rights. However, the article does not describe any realized harm such as injury, disruption, or direct violation of rights but focuses on the regulatory response to potential or ongoing privacy violations. Therefore, this event is best classified as Complementary Information, as it provides an update on governance and regulatory response to an AI system's use and its implications for privacy rights, rather than describing a new AI Incident or AI Hazard.
DeepSeek: no longer available in Apple's and Google's stores in Italy (ilsole24ore.com)

2025-01-29
Borsa italiana
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek generative AI chatbot) is involved, and the event concerns regulatory scrutiny due to potential privacy risks. However, no actual harm or incident has been reported; the app's removal and the authority's inquiry indicate a plausible risk of harm related to data privacy. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to an AI Incident involving violations of data protection rights, but no realized harm is described yet.
The Garante Privacy wants to know what DeepSeek does with Italian users' data

2025-01-29
DDay.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns the use and processing of personal data of users, which could implicate privacy and data protection rights. However, there is no indication that any harm (such as data breaches, misuse, or violations) has occurred yet. The Garante Privacy's action is a preventive regulatory inquiry to assess risks and ensure compliance. Therefore, this event represents a plausible risk scenario but not a realized incident. It fits the definition of an AI Hazard because the development and use of the AI system could plausibly lead to harm (privacy violations) if not properly managed, but no direct or indirect harm has been reported so far.
DeepSeek's farewell to Italy, at least on Android and iOS

2025-01-30
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is involved, and its use (data collection and processing) is under scrutiny by privacy authorities for potential violations of GDPR and risks to minors and public opinion. However, no direct or realized harm has been reported yet. The removal of the app from stores is a precautionary measure following regulatory inquiries. This situation represents a plausible risk of harm related to data privacy and rights violations but not an incident with realized harm. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
DeepSeek: the privacy garante demands clarity; data of millions of Italians at risk

2025-01-28
Open
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory authority's request for information regarding the data practices of an AI chatbot service. While the AI system is involved, there is no indication that any harm has yet occurred or that there is an immediate incident. The focus is on potential risks and ensuring transparency and compliance, which aligns with a governance or societal response to AI-related concerns. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.
DeepSeek has disappeared from the Apple and Google app stores in Italy

2025-01-29
Il Foglio
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek generative AI chatbot) that has experienced a cyber attack and is under regulatory scrutiny for data privacy compliance. While the app was taken down and there are concerns about data protection and misinformation, no actual harm (such as injury, rights violations, or operational disruption) has been reported as having occurred. The regulatory inquiry and the cyber attack highlight plausible risks of harm, including privacy violations and misinformation, but these remain potential rather than realized harms. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident if the issues are confirmed or exploited further.
DeepSeek is no longer available on the App Store and Play Store in Italy | MilanoFinanza News

2025-01-29
Milano Finanza
Why's our monitor labelling this an incident or hazard?
The article focuses on the regulatory investigation and the company's voluntary suspension of the AI chatbot service in Italy due to privacy concerns. There is no report of actual harm or incidents caused by the AI system, nor is there a clear plausible risk of harm described. The event is about governance and societal response to AI-related privacy issues, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.
AI, the Privacy Garante requests information from DeepSeek - ItaliaOggi.it

2025-01-28
Italia Oggi
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the DeepSeek chatbot) and concerns about its data processing practices, which could pose risks to personal data privacy. However, no actual harm or incident has been reported; rather, the privacy authority is seeking information to assess potential risks. This fits the category of Complementary Information, as it relates to governance and oversight activities rather than a realized AI Incident or a plausible AI Hazard.

DeepSeek under the scrutiny of the Privacy Guarantor

2025-01-29
informazione interno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns its development and use, specifically regarding personal data handling. However, the article only reports a regulatory authority's request for information, reflecting a governance response to potential privacy concerns. There is no indication of realized harm or incident caused by the AI system, nor a direct or indirect link to harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related privacy issues without reporting a new AI Incident or AI Hazard.

The Privacy Guarantor moves on DeepSeek. The app disappears from stores in Italy

2025-01-29
Prima Comunicazione
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) and regulatory scrutiny over its data privacy practices. However, there is no report of actual harm or violation having occurred yet. The authority's request for information and the app's removal from stores indicate precautionary measures addressing potential risks. Therefore, this event represents a plausible risk scenario where harm could occur if data privacy is not properly managed, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it concerns an AI system and potential data privacy risks.

Has the Chinese whale gone under? DeepSeek disappears from the App Store and Play Store - StartupItalia

2025-01-29
Startupitalia
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's chatbot) and discusses regulatory and legal scrutiny regarding possible unauthorized use of data and technology, which could imply intellectual property rights violations. However, the article does not report any actual harm or incident caused by the AI system's use or malfunction. The app's removal from stores and the investigation represent potential risks or hazards but not confirmed incidents. Therefore, this event is best classified as Complementary Information, as it provides updates on regulatory and governance responses and ongoing investigations related to the AI system, without reporting a concrete AI Incident or AI Hazard.

The DeepSeek app disappears from the Apple and Google stores in Italy

2025-01-29
Business People
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek generative AI chatbot) is clearly involved. The event stems from the use of the AI system and concerns about its data handling practices and censorship, which could lead to violations of privacy rights and data protection laws. No actual harm or legal violation has been confirmed yet, only a regulatory inquiry and app removal from stores as a precaution. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (privacy violations, censorship harms) if the concerns are validated. It is not Complementary Information because the main focus is not on responses to a past incident but on the current regulatory action and potential risks. It is not unrelated because the AI system and its risks are central to the event.

Suspicions and accusations against DeepSeek, the app most

2025-01-29
IctBusiness
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and discusses its development and use, including data privacy concerns and intellectual property accusations. However, it does not report any realized harm such as injury, rights violations, or community harm caused by the AI system. The privacy authority's inquiry and the accusations represent potential risks and regulatory responses but do not describe an incident where harm has occurred. Therefore, this is best classified as Complementary Information, providing context and updates on AI ecosystem developments and governance responses rather than reporting a new AI Incident or AI Hazard.

The DeepSeek case: Italy investigates, OpenAI accuses, Alibaba raises the stakes. What is Qwen2.5 Max - Notizie.com

2025-01-29
Notizie.com
Why's our monitor labelling this an incident or hazard?
The article describes a situation where DeepSeek's AI system allegedly used OpenAI's proprietary models without authorization, which constitutes a violation of intellectual property rights (harm category c). The Italian privacy authority's investigation into data handling practices indicates concerns about potential harm to personal data privacy, a violation of fundamental rights (also category c). Additionally, the reported large-scale malicious attacks causing service disruption relate to harm category d (harm to communities or services). The involvement of AI systems in these harms is explicit, and the harms are occurring or have occurred, meeting the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek removed from digital stores in Italy

2025-01-30
Voci di Città
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI app) whose use is under scrutiny by the Italian data protection authority for potential violations of data privacy and related rights. The removal of the app from stores and the investigation suggest a credible risk of harm (e.g., privacy violations, bias, influence on democratic processes) that could lead to an AI Incident if confirmed. However, since no direct or indirect harm has been explicitly reported as having occurred, and the focus is on regulatory inquiry and precautionary measures, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the removal and investigation imply a plausible risk of harm, and it is not unrelated as it clearly involves an AI system and potential legal and rights issues.

Privacy Guarantor blocks DeepSeek: 'insufficient answers'

2025-01-31
notiziegeopolitiche.net
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) processing personal data of European users. The Italian privacy authority's intervention is due to violations of GDPR, which is a legal framework protecting fundamental rights. The event involves the use of the AI system leading to breaches of legal obligations and potential harm to users' privacy rights. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of applicable law protecting fundamental rights. The blocking and investigation confirm that harm has occurred or is ongoing, not just a potential risk. Hence, the classification is AI Incident.

Fearing Data Leaks, Taiwan Bans Government Employees from Using DeepSeek

2025-02-02
Jawa Pos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is prohibited by Taiwan's government due to concerns about data leakage and national security risks. Although no actual harm has been reported, the ban is based on the plausible risk that the AI system's operation could lead to significant harm, such as breaches of information security and national security. Therefore, this event qualifies as an AI Hazard because it highlights a credible potential for harm stemming from the AI system's use.

Mass Blocking of DeepSeek over Fears of Chinese Eavesdropping

2025-02-04
detikinet
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI model and chatbot) that processes user data and stores it in China, where laws may compel data sharing with government intelligence. The event involves the use of this AI system and the associated risks of data leakage and espionage. The bans and restrictions by various governments and agencies indicate a credible risk that the AI system's use could lead to harms such as violations of privacy rights and national security breaches. Since the article does not report actual realized harm but focuses on the potential risks and preventive bans, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a clear case of plausible future harm from the AI system's use.

It had cost the US 1 trillion dollars overnight! They are slowly taking action

2025-01-31
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is banned in Taiwan due to security risks and is under investigation in South Korea for data privacy concerns. These actions stem from the AI system's development and use, which could plausibly lead to violations of privacy rights and information security breaches. No direct harm or incident is reported yet, only preventive measures and investigations. Hence, it fits the definition of an AI Hazard, as the AI system's involvement could plausibly lead to an AI Incident but has not yet done so.

Taiwan bans the use of DeepSeek in official institutions

2025-01-31
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's AI application) whose operation is considered to pose security risks including information leakage and cross-border data transfer. These risks could plausibly lead to harm to the community or national security, which falls under harm to communities or property. Since the ban is a preventive measure based on credible security concerns, and no actual harm is reported as having occurred yet, this qualifies as an AI Hazard rather than an AI Incident. The article does not describe realized harm but highlights plausible future harm from the AI system's use in official institutions.

Taiwan Bans Chinese DeepSeek's AI Application in Official Institutions

2025-01-31
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed by DeepSeek and the ban is due to concerns about potential information leakage and security risks. These concerns relate to plausible future harm to information security and national security, which fits the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The event is not merely complementary information or unrelated, as it involves a concrete governmental action based on AI-related security risks.

Taiwan bans the use of DeepSeek in official institutions

2025-01-31
T24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI application) and concerns about its use leading to security risks including information leakage and threats to national information security. Although no actual harm is reported as having occurred, the ban is a preventive measure based on plausible risks of harm from the AI system's use. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm (security breaches) if not controlled.

First Italy, now Taiwan... Bans on DeepSeek keep coming

2025-01-31
Dünya
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI application) whose use is banned due to security risks that could plausibly lead to harm such as information leakage and threats to data privacy and national security. Since no actual harm is reported but the bans are precautionary to prevent potential incidents, this fits the definition of an AI Hazard. The event does not describe realized harm but credible potential harm from the AI system's use in sensitive government contexts, thus it is an AI Hazard rather than an AI Incident or Complementary Information.

Taiwan bans the use of DeepSeek in official institutions - Diken

2025-01-31
Diken
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI application and highlights concerns about security risks including information leakage and cross-border data transfer, which could plausibly lead to harm to information security (harm to property or communities). Since the ban is a preventive measure and no actual harm or incident is reported, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the ban due to potential risks, not on responses to a past incident. It is not an AI Incident because no realized harm is described.

Taiwan bans the use of DeepSeek in official institutions

2025-01-31
Bianet - Bagimsiz Iletisim Agi
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies DeepSeek as an AI system (a large language model) and highlights concerns about data security, cross-border data transfer, and information leakage, which could plausibly lead to harm such as violations of privacy rights or national security breaches. However, the article does not report any actual harm or incident caused by the AI system, only preventive bans and investigations. Hence, the event fits the definition of an AI Hazard, as it involves plausible future harm from the AI system's use in official institutions, but no direct or indirect harm has yet materialized.

Will the DeepSeek bans spread? Italy took the first step, now a block from Taiwan too...

2025-01-31
CNN TÜRK
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use has led to regulatory bans and investigations due to concerns over personal data processing and security risks. Italy's immediate access ban and investigation indicate that the AI system's use has directly or indirectly led to violations of data protection laws, constituting an AI Incident. Taiwan's ban on official use and South Korea's planned investigation reflect plausible future harms related to security and privacy, qualifying as AI Hazards. Given that incidents take precedence over hazards, and Italy's ban is active due to realized concerns, the overall classification is AI Incident.

Taiwan bans DeepSeek

2025-01-31
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI application, a large language model) and the government's decision to ban its use in official institutions due to security concerns including data leakage and cross-border data transfer. These concerns relate to plausible future harms to information security and possibly critical infrastructure. Since no actual harm has been reported, and the event centers on the potential risk and preventive measures, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk and ban, not on updates or responses to a past incident. Therefore, the classification is AI Hazard.

Taiwan forbids government agencies from using DeepSeek

2025-02-01
Nederlands Dagblad
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) that processes user input and generates responses. The concern is that data entered into DeepSeek could be stored on servers in China and accessed by the Chinese government, posing a risk to national security and privacy. Although no direct harm has been reported, the event describes a credible risk that the AI system's use could lead to harm (e.g., leakage of sensitive government information). Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm but no actual harm is described as having occurred yet. The government's ban is a response to this plausible risk, not a report of an incident that has already caused harm.

Taiwan bans DeepSeek in government institutions over national security

2025-02-03
Business AM
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI application) involved in data processing and collection. The ban and investigations stem from concerns that its use could lead to violations of data privacy and national security, which relate to violations of rights and harm to communities. The article reports actual regulatory actions and government bans driven by these risks, but no direct harm has been confirmed and no realized harm event caused by DeepSeek is described. Because the harms remain plausible and credible rather than realized, the event is best classified as an AI Hazard.

AP in talks with European supervisory authorities about DeepSeek

2025-02-05
Dutch IT Channel
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use has resulted in a data breach exposing millions of user conversations, constituting harm to individuals' privacy and a violation of data protection rights. Multiple countries have taken regulatory actions, and the Dutch data protection authority is investigating and coordinating with European counterparts. The direct link between the AI system's use and realized harm to privacy rights meets the criteria for an AI Incident rather than a hazard or complementary information.

Why are governments banning DeepSeek? These are the security risks

2025-02-04
Techopedia.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system explicitly mentioned as being used and banned due to its data processing and transmission practices. The article details realized harms and credible risks, such as data leaks and unauthorized data sharing with foreign intelligence, which constitute violations of privacy rights and pose security threats. These harms fall under violations of human rights and legal obligations protecting personal data. The bans by governments and organizations are responses to these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harms related to privacy and security.

Australia forbids government agencies from using AI tool DeepSeek

2025-02-04
Nederlands Dagblad
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to national security risks and potential unauthorized access to user data, which could be a violation of rights and security. However, no actual harm or incident has occurred yet; the government is acting preemptively to prevent possible harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to rights or security, but no direct or indirect harm has been reported so far.

Taiwan forbids government agencies from using DeepSeek

2025-02-01
RTL Nieuws & Entertainment
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) that processes user input and stores data on servers in China, raising concerns about unauthorized access to sensitive information. The Taiwanese government's ban is a preventive measure to avoid potential harm to national security and privacy, which could be considered harm to communities or violation of rights if data were accessed improperly. However, the article does not report any actual harm occurring yet, only the plausible risk of harm. Therefore, this event is best classified as an AI Hazard, as it involves the plausible future risk of harm due to the AI system's use, but no incident has yet materialized.

Censorship, privacy risks and data theft: DeepSeek proves the importance of European AI

2025-02-05
MT/Sprout
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek, ChatGPT, Copilot) and their involvement in privacy breaches (data leaks), censorship (limiting information about Tiananmen protests), and potential cybersecurity risks (Trojan horse analogy). These constitute violations of privacy and human rights, which are recognized harms under the AI Incident definition. The harms have already occurred (e.g., a million conversations leaked), so this is not merely a potential risk but an actual incident. Therefore, the event qualifies as an AI Incident due to realized harm caused directly or indirectly by AI systems.

Australia and Taiwan ban the use of DeepSeek by...

2025-02-05
Dutch IT Channel
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use by government agencies is banned due to perceived security risks, including data potentially falling into foreign government hands and censorship concerns. These concerns relate to possible violations of rights and harm to government data security, but no actual harm or incident has been reported yet. The article focuses on preventive actions and investigations, indicating a plausible risk rather than a realized incident. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred.

Countries restricting or investigating DeepSeek

2025-02-03
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek-R1) and its use being banned or investigated by various governments due to security and privacy concerns. These concerns relate to potential violations of data protection laws and national security risks, which fall under plausible future harms. No actual harm or incident is described as having occurred, only preventive measures and investigations. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

After Italy, DeepSeek is banned in Taiwan too

2025-02-04
Ensonhaber
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data collection and processing. The event concerns its use in government and critical infrastructure, which has been banned due to plausible risks of data privacy violations and potential misuse of sensitive information. Although no direct harm has been reported, the ban reflects a credible risk that the AI system's use could lead to violations of rights and harm to critical infrastructure management. Therefore, this qualifies as an AI Hazard, as the event highlights plausible future harm from the AI system's use in sensitive contexts.

After Italy and Australia, South Korea also bans DeepSeek

2025-02-06
Sputnik Türkiye
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, so an AI system is involved. The event concerns the use of this AI system and the associated risks of personal data collection and surveillance, which could plausibly lead to violations of privacy rights and security breaches. Since no actual harm has been reported yet, but credible concerns and preventive bans have been implemented, this constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential risks and governmental responses to mitigate them, not on realized harm or incidents.

Australia bans DeepSeek on government systems and devices - By Foreks

2025-02-05
Investing.com Türkiye
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot) whose use in government systems is banned due to unacceptable security risks. The event involves the use of an AI system and the government's response to mitigate potential harm. No actual harm is described, but the ban is based on plausible future risks to data security and national security. Therefore, this event fits the definition of an AI Hazard, as it concerns an AI system whose use could plausibly lead to harm, prompting preventive action by authorities.

The DeepSeek crisis in the AI world: Which countries have banned it?

2025-02-04
NTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI application) whose use has led to government bans and investigations due to security and privacy risks. These risks relate to violations of data protection laws and fundamental rights, which are harms under the AI Incident definition (violations of human rights or breach of legal obligations). The bans indicate that harm or risk of harm has materialized or is ongoing, not just a potential future hazard. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Security concerns mount: Is DeepSeek transferring data to China?

2025-02-06
NTV
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has led to direct harm in the form of privacy violations and potential breaches of data protection laws and national security. The data being sent to China and the hidden tracking mechanisms indicate misuse or harmful use of the AI system. The bans and legislative proposals further confirm the recognition of harm. Hence, this is an AI Incident as the AI system's use has directly led to violations of rights and significant harm to users' privacy and security.

The US debates China's AI DeepSeek: Prison sentences sought for users!

2025-02-07
NTV
Why's our monitor labelling this an incident or hazard?
The article centers on potential risks of and regulatory responses to the use of DeepSeek, a Chinese AI chatbot, including proposed bans and penalties driven by privacy and national security concerns. There is no indication that any harm has yet occurred from the AI system's use, only that authorities are considering or have enacted preventive measures. Because the article mainly reports on legislative and political responses rather than a specific harm event or near-miss, it is best classified as Complementary Information: it provides context and updates on societal and governance responses to AI without describing a concrete AI Incident or AI Hazard.

Australia Also Bans DeepSeek: Here's Why!

2025-02-05
Webtekno
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system, as it is described as a Chinese AI startup. The Australian government banned its use citing security risks, which implies potential harm related to data security or privacy. No direct harm has been reported yet, but the ban indicates a credible risk that the AI system could lead to harm if used. Therefore, this event fits the definition of an AI Hazard, as it involves the use of an AI system that could plausibly lead to harm, prompting preventive regulatory action. There is no indication of realized harm or incident, so it is not an AI Incident. The article does not focus on responses to a past incident or broader ecosystem updates, so it is not Complementary Information. It is clearly related to AI, so it is not Unrelated.

Government ban for the AI that stirred up storms

2025-02-06
CHIP Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is banned by a government due to identified security risks. The article does not report any realized harm or incident caused by the AI system but highlights credible concerns about potential security and privacy harms. The government's preventive action and similar bans by other countries indicate a credible risk of future harm. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek bans continue: Australia joins the wave

2025-02-05
hisse.net
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, so an AI system is involved. The bans and access restrictions are due to concerns about data security risks, implying a plausible risk of harm if the system were used, but no actual harm or incident is described. Therefore, this event represents an AI Hazard, as the development and use of DeepSeek could plausibly lead to security-related harms, prompting preventive government actions. It is not an AI Incident because no harm has occurred, nor is it Complementary Information or Unrelated.

Data Protection Authority: Investigation into DeepSeek begins - WhatsApp also under the microscope

2025-02-06
ekriti.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek) and concerns about its compliance with data protection regulations, which relates to potential violations of fundamental rights (privacy). The investigation is about the legality and potential breaches, indicating possible or ongoing harm to personal data rights. The WhatsApp data breach involves malware but is not explicitly linked to AI. Since the article focuses on the investigation into AI system legality and data protection violations, this constitutes an AI Incident due to the direct or indirect harm to rights through AI system use and data protection concerns.

Data Protection Authority: Investigation launched into DeepSeek and WhatsApp malware | Protagon.gr

2025-02-06
Protagon.gr
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and a malware incident affecting WhatsApp users. However, it does not report any realized harm caused by the AI system or the malware, only that investigations have started. There is no indication that the AI system has directly or indirectly caused harm yet, nor that the malware involves AI. Therefore, this is a case of ongoing regulatory and investigative activity without confirmed harm or plausible future harm detailed. This fits the category of Complementary Information, as it provides updates on governance and oversight related to AI and data protection but does not describe a specific AI Incident or AI Hazard.

DeepSeek: Data Protection Authority launches investigation

2025-02-06
Ελεύθερος Τύπος
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, a chatbot based on a large language model. The Data Protection Authority's investigation concerns the legality of its data-processing practices under the GDPR, implying potential violations of personal data rights, and the blocking of the chatbot by government agencies and companies over fears of data leakage further points to credible risks to fundamental rights. Although the article does not describe specific harm events in detail, the investigation and blocking are responses to harms or legal breaches that have occurred or are strongly suspected, which meets the criteria for an AI Incident involving violations of rights under applicable law rather than a mere hazard or complementary information.
Thumbnail Image

Personal data: Investigation into DeepSeek and WhatsApp | Η ΚΑΘΗΜΕΡΙΝΗ

2025-02-07
Η ΚΑΘΗΜΕΡΙΝΗ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek chatbot) and AI-related software (Graphite surveillance software) involved in data processing and potential privacy violations. The Greek authority's investigation is a governance response to possible breaches of data protection laws (GDPR). No confirmed harm or violation has been established yet; the article focuses on the initiation of investigations and regulatory scrutiny. Thus, it does not describe a realized AI Incident or a plausible AI Hazard but rather a Complementary Information event about ongoing oversight and potential regulatory actions.
Thumbnail Image

The Data Protection Authority has launched an investigation into DeepSeek - WhatsApp malware also under the microscope

2025-02-06
The TOC
Why's our monitor labelling this an incident or hazard?
The investigation concerns the use of an AI system (DeepSeek) and a data breach involving malicious software affecting personal data. However, the article does not report any realized harm or incident caused by the AI system DeepSeek or the malicious software; rather, it reports ongoing investigations. Therefore, this is not an AI Incident or AI Hazard but a governance and regulatory response providing complementary information about AI-related developments and data protection concerns.
Thumbnail Image

Investigation in Greece too into breach of WhatsApp user data - ERT Open

2025-02-06
ertopen.com
Why's our monitor labelling this an incident or hazard?
The Paragon spyware is a malicious tool that likely uses AI or advanced algorithmic techniques to target individuals, including journalists and activists, leading to unauthorized data access and privacy breaches. This constitutes a violation of fundamental rights and personal data protection laws. The involvement of AI-related technology in the spyware and the resulting harm to individuals' rights and privacy meet the criteria for an AI Incident. The investigation into DeepSeek, an AI application, also relates to legal compliance but does not negate the primary incident of harm caused by the spyware.
Thumbnail Image

DeepSeek: Under investigation by the Greek Data Authority

2025-02-06
xronos.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and concerns about its use potentially violating GDPR, which protects fundamental rights related to personal data. Since the investigation is ongoing and no confirmed harm or breach has been established yet, this situation represents a plausible risk of harm (legal rights violation) rather than a realized incident. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if violations are confirmed.
Thumbnail Image

ΑΠΔΠΧ (Hellenic DPA): Investigation into DeepSeek and WhatsApp malware

2025-02-06
Business Daily
Why's our monitor labelling this an incident or hazard?
The AI system DeepSeek is explicitly mentioned as an AI application under investigation for compliance with data protection laws, indicating its development or use is under scrutiny. The malicious software affecting WhatsApp users involves a data breach, which constitutes harm to personal data privacy, a violation of rights under applicable law. However, the article describes ongoing investigations and does not report realized harm directly caused by DeepSeek or the malicious software, but rather the potential or alleged breaches. Therefore, this event is best classified as Complementary Information, as it provides updates on investigations and regulatory responses related to AI and data privacy incidents, rather than reporting a confirmed AI Incident or AI Hazard.
Thumbnail Image

Hellenic Data Protection Authority: Investigation launched into DeepSeek and WhatsApp - Mononews.gr

2025-02-06
Mononews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek, an AI application, and an investigation into its GDPR compliance, indicating AI system involvement in development or use. The WhatsApp data breach involves malicious software but does not explicitly confirm AI involvement. No direct or indirect harm caused by AI systems has been confirmed yet; the investigations are preliminary. Hence, this is not an AI Incident or AI Hazard but rather Complementary Information about regulatory and legal responses to potential AI-related privacy concerns.
Thumbnail Image

Data Protection Authority: Launching an investigation into DeepSeek

2025-02-06
myportal.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its use and data handling practices. The investigation by the Data Protection Authority and cybersecurity experts' warnings indicate plausible risks of harm, including violations of personal data rights and cybersecurity threats. No actual harm or incident is reported yet, only potential risks and an ongoing inquiry. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident but has not yet done so.
Thumbnail Image

Data Protection Authority: Ex officio investigation launched into DeepSeek - WhatsApp under the microscope

2025-02-06
emakedonia.gr
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application under investigation for possible violations of data protection regulations, indicating a plausible risk of harm related to AI use, qualifying it as an AI Hazard. There is no indication that harm has already occurred due to DeepSeek, so it is not an AI Incident. The WhatsApp data breach involves malware but does not mention AI systems, so it is unrelated to AI. The article mainly reports on investigations and regulatory responses, not on realized harm or incidents caused by AI.
Thumbnail Image

Data Protection Authority: Investigation launched into DeepSeek and WhatsApp malware

2025-02-06
Newpost.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI application (DeepSeek) and a data breach involving malicious software affecting WhatsApp users. The investigation concerns potential violations of personal data rights, which fall under human rights and legal obligations. Since the event involves an AI system (DeepSeek) and concerns realized or alleged violations of data protection rights, it qualifies as an AI Incident. The WhatsApp malware case also involves harm to personal data rights. Therefore, the event is best classified as an AI Incident due to the direct or indirect harm to fundamental rights through AI system use or misuse.
Thumbnail Image

Chinese firm DeepSeek in the crosshairs of the Hellenic Data Protection Authority

2025-02-06
Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI application (DeepSeek) and a regulatory investigation into its compliance with data protection laws. However, there is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm yet. The investigation is about potential legal compliance issues, not about realized harm. Similarly, the WhatsApp data breach involves malicious software but does not specify AI involvement or harm caused by AI systems. Therefore, this event is best classified as Complementary Information, as it provides context and updates on regulatory scrutiny related to AI and data protection but does not describe an AI Incident or AI Hazard.
Thumbnail Image

Data Protection Authority investigations into DeepSeek

2025-02-06
Newsbomb
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and a regulatory authority's investigation into its compliance with data protection laws. However, there is no indication that any harm has occurred or that the AI system has caused or could plausibly cause harm. The event is about governance and oversight, providing complementary information about societal and legal responses to AI deployment rather than reporting an incident or hazard.
Thumbnail Image

DeepSeek: Data Protection Authority investigation into the app's legality | Η ΚΑΘΗΜΕΡΙΝΗ

2025-02-06
Η ΚΑΘΗΜΕΡΙΝΗ
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek, an AI chatbot) whose development and use are under scrutiny for potential violations of data protection laws and risks to personal data privacy. Although no realized harm or confirmed data breach is reported, the investigation and expressed concerns about inadequate safeguards and unclear data handling practices indicate a credible risk that could plausibly lead to violations of fundamental rights (privacy) under applicable law. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and under investigation.
Thumbnail Image

The Data Protection Authority begins an investigation into the DeepSeek AI application in Greece

2025-02-06
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and a regulatory investigation into its compliance with data protection laws. However, there is no indication that harm has already occurred due to the AI system's development, use, or malfunction. The investigation is a precautionary measure to assess legality and potential risks. The data breach involving malicious software is mentioned but not linked to AI. Therefore, this event represents a potential risk scenario or regulatory scrutiny rather than a realized harm or incident. Hence, it qualifies as Complementary Information, as it provides context and updates on governance and oversight related to AI without describing a specific AI Incident or AI Hazard.
Thumbnail Image

DeepSeek: Investigation by the Personal Data Protection Authority

2025-02-06
Skai.gr
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and an investigation into its compliance with data protection laws, which relates to the development or use of an AI system. However, there is no indication that any harm has yet occurred or that the AI system has directly or indirectly caused harm. The investigation itself is a governance response and does not describe a realized incident or a plausible future harm from the AI system. Similarly, the WhatsApp malware case is related to data breach but does not specify AI involvement. Therefore, this is best classified as Complementary Information, as it provides updates on regulatory scrutiny and governance responses related to AI and data protection.
Thumbnail Image

The Data Protection Authority launches an investigation into the WhatsApp breach and DeepSeek - Dnews

2025-02-06
dnews.gr
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and a data breach involving WhatsApp users. The investigation into DeepSeek concerns its legality under data protection laws, implying potential rights violations, but no actual harm or incident is reported yet. The WhatsApp breach involves malware but does not specify AI involvement. Since the article focuses on the initiation of an investigation and not on a confirmed AI Incident or AI Hazard, it fits the definition of Complementary Information, providing context and updates on AI-related regulatory actions without describing a new harm or plausible future harm caused by AI.
Thumbnail Image

DeepSeek under the Greek Data Authority's microscope | Parallaxi Magazine

2025-02-06
Parallaxi Magazine
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek chatbot) and concerns about its use and data handling practices. However, the event is about an investigation into possible violations and has not confirmed any actual harm or breach yet. Therefore, it represents a plausible risk or potential for harm rather than a realized incident. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of data protection laws and associated harms, but no direct or indirect harm has been confirmed at this stage.
Thumbnail Image

DeepSeek: Investigation by the Personal Data Protection Authority

2025-02-06
News 24/7
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI application (DeepSeek) and concerns about potential violations of data protection laws due to its use, which involves personal data processing. However, the article does not report any realized harm or confirmed violation yet; it only describes an ongoing investigation into possible legal breaches and data privacy issues. There is no indication that harm has already occurred or that the AI system has malfunctioned or been misused to cause harm. Therefore, this event represents a plausible risk or potential harm scenario related to AI use, qualifying it as an AI Hazard rather than an Incident. The WhatsApp malware case is mentioned but is not clearly linked to AI systems, so it does not affect the classification.
Thumbnail Image

Investigation into DeepSeek begins in Greece

2025-02-06
Αθήνα 9,84
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns its compliance with data protection laws, which relates to potential violations of rights. However, the article describes the initiation of an investigation rather than a realized harm or incident. Therefore, this is a plausible risk scenario (hazard) rather than an incident. The WhatsApp malware case is related to data breach but does not explicitly involve AI, so it is not classified as AI-related harm here.
Thumbnail Image

DeepSeek under the Greek Data Authority's microscope | in.gr

2025-02-06
in.gr
Why's our monitor labelling this an incident or hazard?
The DeepSeek chatbot is an AI system, and the investigation concerns whether its development or use has led to violations of data protection laws (a breach of obligations under applicable law protecting fundamental rights). Although no confirmed harm is reported yet, the investigation is triggered by complaints and concerns about potential rights violations. Since the event focuses on the investigation of possible legal breaches and data privacy issues related to AI use, it fits best as Complementary Information, providing context and updates on governance and regulatory responses to AI-related concerns. There is no direct or confirmed harm reported, so it is not an AI Incident. It is also not merely unrelated news or a product launch, but a governance response to potential AI-related legal issues.
Thumbnail Image

Data Protection Authority investigation into the DeepSeek AI application

2025-02-06
NewsIT
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and an ongoing investigation by the data protection authority regarding its legality under data protection laws. However, there is no indication that the AI system has caused any direct or indirect harm yet. The investigation is a precautionary or regulatory action, which fits the definition of Complementary Information as it provides context and governance response without reporting a realized incident or hazard. The WhatsApp data breach involves malicious software but does not specify AI involvement, so it is unrelated to AI incidents or hazards.
Thumbnail Image

Investigation into DeepSeek begins in Greece: Malware "infected" WhatsApp users

2025-02-06
Reader
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and a data breach involving malware affecting WhatsApp users. However, it does not specify that the AI system caused or contributed to the malware incident or the data breach. The investigation concerns the legality of the AI system's data processing under GDPR, which is a governance/legal response rather than a direct or indirect harm caused by the AI system. The malware incident is related but not explicitly linked to AI system malfunction or misuse. Therefore, this is best classified as Complementary Information, as it provides context on regulatory scrutiny and data protection issues related to AI and cybersecurity, without describing a direct AI Incident or plausible AI Hazard.
Thumbnail Image

DeepSeek: Investigation in Greece into the AI application

2025-02-06
ant1news.gr
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (DeepSeek) is explicit, and the investigation concerns its legality under data protection laws, which relates to the development or use of the AI system. However, the article does not describe any actual harm or violation caused by the AI system, only that an investigation is underway. Similarly, the WhatsApp data breach involves malicious software but does not specify AI involvement or harm caused by AI. Since the article mainly reports on regulatory actions and investigations without describing a realized AI-related harm or a plausible future harm, it fits the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

Data Protection Authority: Investigation into DeepSeek and WhatsApp malware

2025-02-06
Madata.GR
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) and a data breach involving malicious software affecting WhatsApp users. Both relate to potential violations of personal data protection rights, which fall under human rights violations. However, the article only states that investigations have started; it does not report that harm has already occurred or that the AI system or malware caused direct or indirect harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it is an update on ongoing regulatory scrutiny and legal processes, fitting the definition of Complementary Information.
Thumbnail Image

WhatsApp and DeepSeek in the crosshairs of the Greek Data Authority

2025-02-06
Fpress.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (DeepSeek chatbot and WhatsApp) and their handling of personal data, which falls under the scope of AI systems as defined. The Greek authority's investigation is due to potential GDPR violations, which are breaches of legal obligations protecting fundamental rights. Although no actual harm is confirmed yet, the ongoing investigations and prior regulatory actions in other countries indicate a credible risk of harm. Since the event concerns potential future harm rather than confirmed incidents, it fits the definition of an AI Hazard rather than an AI Incident. The article does not primarily focus on responses or updates to past incidents but on the initiation of investigations, so it is not Complementary Information. Hence, the classification is AI Hazard.
Thumbnail Image

Dual investigation by the Personal Data Authority into WhatsApp and DeepSeek

2025-02-06
ΤΟ ΒΗΜΑ
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (DeepSeek) under investigation for compliance with data protection regulations, implying potential risks related to personal data privacy. However, it does not report any actual harm or confirmed incident caused by the AI system. Therefore, this situation represents a plausible risk or concern that could lead to harm if violations are found, fitting the definition of an AI Hazard. The WhatsApp issue involves malicious software but does not explicitly mention AI involvement. Since the main new AI-related element is the investigation of DeepSeek, and no harm has yet occurred, the classification is AI Hazard.
Thumbnail Image

Data Protection Authority: Investigation into DeepSeek and WhatsApp malware

2025-02-06
topontiki.gr
Why's our monitor labelling this an incident or hazard?
The AI application DeepSeek is explicitly mentioned and is under investigation for compliance with data protection laws, which relates to the development or use of an AI system. However, there is no indication that any harm has yet occurred or that the AI system has directly or indirectly led to harm. The investigation itself implies a potential risk or concern but does not confirm realized harm. Similarly, the malware incident is related to data breach but does not involve AI. Therefore, the DeepSeek case represents an AI Hazard as it plausibly could lead to an AI Incident if non-compliance or misuse is confirmed, but no harm is reported yet. The WhatsApp malware case is unrelated to AI. Since the article mainly reports on investigations and potential risks without confirmed harm, the overall classification is AI Hazard.
Thumbnail Image

Data Protection Authority investigation into DeepSeek and WhatsApp - Zougla

2025-02-06
Zougla.gr (official)
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about data protection compliance, which relates to potential violations of rights. However, no actual harm or confirmed violation is reported yet; the authority has only started an investigation. The WhatsApp data breach involves malware, but there is no indication that AI caused or contributed to the breach. Therefore, this is a plausible risk or potential issue under investigation, fitting the definition of an AI Hazard for DeepSeek and unrelated or complementary for the WhatsApp malware case. Since the main focus is on the investigation and potential legal issues rather than confirmed harm, the classification is AI Hazard.
Thumbnail Image

The Hellenic Data Protection Authority has launched an investigation into DeepSeek and WhatsApp malware

2025-02-06
STARTUPPER
Why's our monitor labelling this an incident or hazard?
The DeepSeek AI application is under investigation for compliance with data protection laws, indicating concerns about possible violations of personal data rights due to its AI use. This suggests potential or ongoing harm to fundamental rights, fitting the definition of an AI Incident if harm is realized or an AI Hazard if only potential. The WhatsApp malware incident involves a data breach affecting users, which is a violation of personal data rights. Since the breach has occurred and affects users, it constitutes an AI Incident related to harm to rights. Therefore, the overall event includes at least one AI Incident (WhatsApp data breach) and a potential AI Incident or Hazard (DeepSeek investigation). Given the realized harm in the WhatsApp case, the classification is AI Incident.
Thumbnail Image

Australia urges citizens to use DeepSeek with great caution; Italy fears personal data leaks | 聯合新聞網

2025-01-30
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its data collection and privacy practices. The involvement is related to the use and development of the AI system. However, the article does not report any realized harm or incident but rather governmental warnings, investigations, and requests for information to assess risks. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (privacy violations and data breaches), but no direct harm has been reported yet.
Thumbnail Image

Concerns surface after DeepSeek's viral rise: an overview of major countries' responses

2025-02-01
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use has triggered significant concerns about data privacy, security, and intellectual property rights. The article details multiple countries' regulatory and governmental actions to address these concerns, indicating plausible risks of harm such as data breaches, misuse of personal data, and national security threats. Since no specific harm has yet been reported as having occurred, but credible concerns and precautionary measures are in place, this situation fits the definition of an AI Hazard rather than an AI Incident. The article primarily focuses on the potential risks and responses rather than on actual incidents of harm or violations.
Thumbnail Image

Artificial intelligence: Italy blocks access to Chinese AI app DeepSeek

2025-01-31
Handelsblatt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a Chinese AI chatbot) whose use has raised concerns about data privacy and legal compliance. The blocking of access and investigation by the Italian authority is a governance response to potential violations but does not describe realized harm or direct incidents caused by the AI system. Therefore, this is Complementary Information about regulatory and governance actions related to AI, not an AI Incident or Hazard.
Thumbnail Image

Italy questions DeepSeek over data handling; app cannot be downloaded in the country

2025-01-29
Bem Paraná
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI app) is involved, and the event concerns its use and data processing practices. The Italian authority's concern about potential risks to millions of people's data indicates a plausible risk of harm related to privacy and legal rights. However, no realized harm or incident has been reported so far, only a regulatory inquiry and app removal as precautionary measures. Therefore, this qualifies as an AI Hazard, reflecting a credible potential for harm but no confirmed incident yet.
Thumbnail Image

'deepseek' is also a champion at hoovering up personal data on behalf of the Chinese regime

2025-01-29
dagospia.com
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is involved as it processes personal data. The data protection authority's inquiry indicates concern about potential risks to data privacy and legal compliance, which could plausibly lead to violations of rights if data misuse or unauthorized storage occurs. However, the article does not report any actual harm or confirmed violations yet, only a regulatory request for information and risk assessment. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if risks materialize, but no incident has occurred yet.
Thumbnail Image

Italy's data protection authority blocks Deepseek

2025-01-30
blue News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Deepseek) whose use has led to regulatory action due to concerns about data privacy violations, which fall under violations of applicable law protecting fundamental rights. Although no direct harm such as injury or property damage is reported, the blocking of the AI system is a response to potential or ongoing violations of data protection rights. This constitutes an AI Incident because the AI system's use has directly led to a breach or risk of breach of legal obligations protecting user data rights, prompting official enforcement measures.
Thumbnail Image

DeepSeek: app removed in Italy, Garante opens investigation

2025-01-29
CeoTech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek, an AI generative app) and its use in Italy. The investigation by the privacy authority concerns the app's data collection and processing practices, which could plausibly lead to violations of privacy rights (a form of human rights violation). The app's removal from stores is a preventive action, indicating potential future harm rather than harm that has already occurred. The hacker attack on the app's servers is mentioned but does not describe realized harm caused by the AI system itself. Since no direct or indirect harm has been reported yet, but there is a credible risk of privacy violations, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Italy demands explanations from DeepSeek as app is blocked on the Play Store and App Store

2025-01-29
tecmundo.com.br
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is involved, specifically an AI model and its data practices. The event stems from the use and development of the AI system, focusing on data privacy and regulatory compliance. However, no actual harm or violation has been confirmed or reported yet; the authorities are investigating potential risks and compliance issues. The app's removal from stores suggests precautionary measures but does not confirm realized harm. Therefore, this situation represents a plausible risk of harm related to data privacy and legal compliance, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Privacy Garante blocks DeepSeek, halting the Chinese AI in Italy

2025-01-30
Cremonaoggi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns its use and data processing practices. However, there is no explicit mention of realized harm such as injury, rights violations already occurring, or other direct damages. Instead, the authority has taken preventive regulatory action (blocking and investigation) due to insufficient compliance and potential risks to data privacy, which is a fundamental right. Therefore, this event represents an AI Hazard (plausible risk of harm to rights) and a governance response rather than an AI Incident. It is not merely complementary information because the regulatory action is a primary event, but no actual harm has been reported yet.
Thumbnail Image

The Privacy Garante blocks DeepSeek in Italy

2025-01-30
euronews
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot system that processes human conversations, thus qualifying as an AI system. The Italian privacy authority's intervention is due to concerns about the handling of user data, which implicates violations of fundamental rights (privacy). The blocking of the app and the opening of an investigation indicate that the AI system's use has led to or is strongly suspected to have led to harm or violations of rights. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to a breach or risk of breach of applicable law protecting fundamental rights. The event is not merely a potential hazard or complementary information but a concrete regulatory response to an ongoing or realized issue.
Thumbnail Image

Italy blocks DeepSeek in the country to protect data

2025-01-30
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has been blocked by a regulatory authority due to concerns about data protection and legal compliance. Although no direct harm is reported, the situation presents a plausible risk of harm to users' data privacy and rights if the AI system were to continue operating without proper safeguards. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to violations of data protection rights and related harms if unaddressed. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it reports a regulatory action in response to potential harm, nor is it unrelated.
Thumbnail Image

The Garante stops Deepseek: "It does not protect Italians' data"

2025-01-31
ilGiornale.it
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot using AI to process human conversations). The event involves the use of this AI system and its failure to protect user data, including a security vulnerability that exposed sensitive information. This constitutes a violation of users' privacy rights, which falls under harm category (c) - violations of human rights or breach of legal obligations protecting fundamental rights. The regulatory authority's urgent intervention and limitation order confirm that harm has occurred or is ongoing. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's use and malfunction.

Italy is the first country to block the AI 'DeepSeek'

2025-01-31
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system ('DeepSeek' chatbot) and its use of personal data, which is under investigation by the Italian data protection authority. The authority's urgent blocking of the app is a regulatory measure to protect user data, indicating concern about potential legal violations. However, there is no mention of actual harm or incidents caused by the AI system's use or malfunction. The focus is on the regulatory response and investigation, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard. The event enhances understanding of AI governance and data protection enforcement but does not describe a realized or plausible harm event.

Italy blocks the Chinese app 'DeepSeek' over lack of information

2025-01-31
Canarias7
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly involved, as it is an AI chatbot application. The event stems from the use and development of this AI system, specifically concerns about data collection, training data, and privacy compliance. Although no direct harm has been reported, the blocking and investigations indicate a credible risk that the AI system could lead to violations of data protection rights and privacy, which are human rights. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of rights and privacy harm. The article focuses on regulatory actions and investigations rather than reporting realized harm, so it is not an AI Incident or Complementary Information. It is not unrelated because it concerns an AI system and potential harm.

Italy's data protection authority blocks DeepSeek

2025-01-30
Handelszeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's AI app) and the regulatory authority's intervention due to data protection concerns. While no explicit harm such as injury or rights violations is detailed as having occurred, the blocking and investigation imply potential or ongoing violations of data protection rights, which fall under human rights and legal obligations. Since the article describes an active regulatory response to a potentially harmful AI system, but does not confirm realized harm, this is best classified as Complementary Information providing context on governance and societal response to AI-related risks.

Italy blocked the Chinese AI DeepSeek, awaiting "information"

2025-01-30
Cooperativa.cl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose operation is under scrutiny by a data protection authority due to concerns about data usage and compliance with legal frameworks. Although no direct harm has been reported yet, the blocking order reflects a plausible risk of violations of data protection rights, which are part of fundamental rights. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to a breach of obligations under applicable law protecting fundamental rights if allowed to operate without sufficient oversight.

Artificial intelligence: the Privacy Garante blocks Deepseek

2025-01-30
Italia Oggi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has directly led to regulatory action by a data protection authority, indicating a violation of data protection rights (a breach of obligations under applicable law protecting fundamental rights). Additionally, the allegations of unauthorized use of proprietary AI models relate to intellectual property rights violations. These constitute realized harms linked to the AI system's use, qualifying the event as an AI Incident under the framework.

The privacy Garante blocks DeepSeek. The reasons behind the sensational decision

2025-01-30
Tiscali Innovazione
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have raised regulatory concerns, leading to an official order limiting data processing and an ongoing investigation. This indicates a potential or actual violation of data protection laws, which protect fundamental rights related to personal data privacy. Since the event describes a regulatory action prompted by possible breaches of privacy rights linked to the AI system's use, it constitutes an AI Incident under the category of violations of human rights or breach of applicable law protecting fundamental rights. The harm is indirect but has materialized as legal non-compliance and potential privacy violations affecting users' rights.

DeepSeek blocked by the privacy Garante: "A measure to protect Italians' data"

2025-01-30
Il Mattino
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to concerns about violations of data protection laws protecting fundamental rights of individuals. The Garante's urgent limitation of data processing and opening of an investigation indicate that the AI system's use has directly or indirectly led to a breach of obligations under applicable law intended to protect fundamental rights. Therefore, this qualifies as an AI Incident due to violation of rights (c).

Italy blocks the Chinese app DeepSeek over lack of information

2025-01-30
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory intervention where Italy blocks the use of an AI chatbot application due to lack of transparency about data usage and compliance with data protection laws. The AI system is clearly involved, and the event stems from its use and development. However, no direct or indirect harm has been reported or can be reasonably inferred as having occurred. The blocking and investigation are precautionary and governance measures to address potential risks. This fits the definition of Complementary Information, as it details societal and governance responses to AI-related concerns without reporting an actual incident or imminent hazard.

Italian authority blocks access to DeepSeek in the country over lack of information about data use

2025-01-30
jornalfloripa.com.br
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is being restricted by a regulatory authority due to lack of transparency about personal data usage. While no direct harm is reported, the regulatory action is a response to potential violations of data protection laws, which relate to human rights and legal obligations. Since the article describes a regulatory intervention due to concerns about data use but does not report actual harm or incident, this qualifies as Complementary Information about governance and societal response to AI use rather than an AI Incident or Hazard.

The privacy Garante has ordered limitations that prevent the use of DeepSeek in Italy

2025-01-30
Il Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) and concerns its use and data processing practices. However, the article does not describe any realized harm or incident caused by the AI system, but rather a regulatory intervention to prevent potential legal violations. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI systems, without reporting a new AI Incident or AI Hazard.

Italy blocks DeepSeek citing protection of Italians' data after opening an investigation

2025-01-30
Bem Paraná
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose operation is blocked due to concerns about personal data protection, which relates to potential violations of privacy rights under applicable law. However, the article does not report any realized harm or incident caused by the AI system, only the regulatory action and investigation to prevent or address possible non-compliance. Therefore, this is not an AI Incident but rather a regulatory response to potential risks. It is not merely unrelated or general news because it concerns a specific AI system and its legal compliance. It is best classified as Complementary Information since it provides updates on governance and regulatory responses to AI use and data protection issues, without reporting a direct or indirect harm incident or a plausible future harm event.

Privacy Garante blocks DeepSeek

2025-01-30
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised privacy concerns leading to regulatory action. The AI system processes personal data of users, and the authority's intervention aims to prevent potential or ongoing violations of data protection rights, which are fundamental rights under applicable law. Although no explicit harm is reported as having occurred, the urgent blocking and investigation indicate a plausible risk of violation of rights and harm to users' privacy. Therefore, this event is best classified as Complementary Information, as it reports a governance response to potential AI-related harms rather than a confirmed AI Incident or a mere hazard without regulatory action.

Italy pulls the plug on DeepSeek: privacy and rules cannot be dodged

2025-01-30
QuiFinanza
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI-powered application) whose use involves processing personal data. The Italian Garante's intervention is a direct consequence of the AI system's non-compliance with privacy laws, constituting a breach of legal obligations intended to protect fundamental rights. The blocking of the app and formal investigations indicate realized harm in terms of violations of data protection rights. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and violations of applicable law protecting fundamental rights (privacy).

Italy blocks DeepSeek citing protection of Italians' data after opening an investigation

2025-01-30
O Dia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI platform) whose operation has been blocked by a national data protection authority due to concerns about personal data processing. While no direct harm is reported, the regulatory action is a response to potential violations of data protection laws, which relate to fundamental rights. Since the article describes a regulatory intervention to prevent or address possible legal breaches rather than an actual realized harm or incident, this qualifies as Complementary Information about governance and societal response to AI-related risks rather than an AI Incident or AI Hazard.

Privacy Garante blocks DeepSeek

2025-01-30
Agenparl
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a relational chatbot designed to understand and process human conversations). The event involves the use of this AI system and concerns the protection of personal data, which is a fundamental right. The authority's intervention to limit data processing indicates a violation or risk of violation of data protection rights under applicable law. Since the event describes an official regulatory action in response to insufficient compliance and potential harm to users' privacy rights, it qualifies as an AI Incident involving violations of human rights or breach of legal obligations.

Italy urgently blocks the DeepSeek app after receiving no details about the use of personal data

2025-01-30
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use involves processing personal data. The Italian regulator's decision to block the app stems from concerns about insufficient transparency and potential privacy risks, which could lead to violations of data protection rights. Since no actual harm or breach has been reported, but a high risk is identified and regulatory action taken to prevent harm, this fits the definition of an AI Hazard. The event does not describe a realized AI Incident, nor is it merely complementary information or unrelated news.

DeepSeek blocked in Italy by the privacy Garante: the reasons for the urgent decision on Italians' data

2025-01-30
Virgilio Notizie
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (chatbot processing human conversations). The Garante della Privacy's urgent limitation is a regulatory action responding to concerns about data privacy and compliance with GDPR, which protects fundamental rights. No actual harm or incident is reported; rather, the event is a preventive measure and investigation. This fits the definition of Complementary Information, as it provides context on governance and regulatory responses to AI-related privacy concerns, rather than describing a direct or indirect AI Incident or a plausible future AI Hazard.

AI: the Privacy Garante blocks DeepSeek for Italian users with immediate effect

2025-01-30
RaiNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek chatbot) whose use has directly led to a violation of data protection laws and exposure of sensitive user information, which is a breach of fundamental rights under applicable law. The regulatory authority's urgent intervention and investigation confirm the harm has occurred. Hence, this qualifies as an AI Incident due to realized harm related to privacy and data protection violations caused by the AI system's use and malfunction (bug).

Italy: regulator limits the use of DeepSeek, opens an inquiry into data use

2025-01-30
La Croix
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use of personal data is under regulatory scrutiny. The authority's action to restrict data processing and open an investigation indicates potential or ongoing violations of data protection laws, which are part of fundamental rights. Although no explicit harm such as data breaches or misuse is detailed, the regulatory intervention implies a significant risk or occurrence of rights violations. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in a legal rights violation context.

DeepSeek app blocked in Italy; this is the reason

2025-01-30
Grupo Milenio
Why's our monitor labelling this an incident or hazard?
The application 'DeepSeek' is an AI system (an AI chatbot) whose use involves processing personal data. The Italian authority's blocking action follows the developers' failure to provide adequate information about data handling, raising concerns about violations of data protection laws (a breach of obligations under applicable law protecting fundamental rights). This constitutes an AI Incident because the AI system's use has directly led to regulatory intervention to prevent harm to users' data privacy rights. The event involves realized harm in the form of legal non-compliance and potential rights violations, not just a plausible future risk, and therefore meets the criteria for an AI Incident rather than a hazard or complementary information.

Italy, fearing personal data leaks, demands DeepSeek clarify whether data is stored in China

2025-01-29
經濟日報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI platform) and concerns about its data collection and storage practices. The Italian data protection authority's inquiry reflects a potential risk that the AI system's use or data handling could lead to violations of privacy rights, which are human rights. Since no actual harm or violation has been reported yet, but there is a credible concern that such harm could occur, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risk and regulatory inquiry, not on a response to a past incident.

DeepSeek temporarily removed from Italian app stores, possibly in connection with the data regulator's investigation

2025-01-30
Yahoo News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data collection and AI model training. The Italian data protection authority has taken action by requesting clarifications and the app's temporary removal from app stores, indicating regulatory concern over potential privacy violations. Additionally, US investigations into unauthorized data use and national security risks highlight plausible future harms. However, no direct harm or incident is reported yet, only investigations and regulatory measures. Therefore, this event qualifies as an AI Hazard due to the credible risk of harm from data privacy breaches and national security implications.

DeepSeek pulled in Italy; regulator will also examine data protection

2025-01-30
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, and the regulatory actions focus on its data processing practices and compliance with GDPR, which relates to legal obligations protecting fundamental rights. No direct or indirect harm has been reported yet, only the potential for such harm if data protection violations are confirmed. The app's removal and investigation indicate a plausible risk of harm, qualifying this as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI involvement is explicit and central to the event.

Italian regulator blocks DeepSeek to protect personal data

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system providing chatbot services, explicitly mentioned. The Italian regulator's blocking action is due to insufficient transparency about personal data use, which implicates potential violations of data protection laws and privacy rights (a form of human rights). No actual harm is reported yet, but the regulatory intervention and investigations indicate a credible risk that the AI system's use of personal data could lead to violations if unaddressed. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident involving rights violations. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.

Italy issues emergency blocking order restricting DeepSeek's handling of citizens' personal data

2025-01-31
經濟日報
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data, and the Italian authority's emergency order is a governance response to potential violations of data protection rights. The article does not report actual harm or incidents caused by DeepSeek's AI system but focuses on regulatory action to prevent harm. This fits the definition of Complementary Information, as it provides important context and updates on AI governance and risk management without describing a realized AI Incident or a plausible AI Hazard leading to harm.

Italy issues emergency blocking order restricting DeepSeek's handling of citizens' personal data

2025-01-31
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data, and the Italian authority's emergency restriction is a response to concerns about potential misuse or mishandling of personal data, which could lead to violations of privacy rights. No actual harm has been reported yet, but the credible risk and regulatory intervention indicate a plausible future harm scenario. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek | Italy imposes immediate block and investigation; US congressional offices warned against use

2025-01-31
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek) whose use involves processing personal data, triggering emergency regulatory measures due to insufficient information about data handling and potential privacy risks. No explicit harm has been reported yet, but the block, the authorities' investigation, and the warnings issued to US congressional offices indicate a credible risk of harm to personal data rights. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of fundamental rights and harm to individuals' privacy. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on regulatory actions taken in response to potential AI-related harm.

DeepSeek | Italy issues "ban order", citing insufficient explanation of personal data use

2025-01-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use of personal data is under scrutiny by Italian regulators. The ban is due to inadequate explanation of data usage, which could plausibly lead to violations of privacy rights if unresolved. Since no actual harm has been reported, and the event centers on regulatory response and investigation, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it involves an AI system and potential privacy risks.

US Congress reportedly bans DeepSeek as multiple countries investigate related privacy risks

2025-01-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI model, and the investigations and bans stem from concerns about privacy risks and the spread of malicious software, which could lead to violations of personal data rights and security breaches. Since no actual harm is reported yet but there is a credible risk of harm, this qualifies as an AI Hazard. The event focuses on potential risks and regulatory responses rather than confirmed harm, so it is not an AI Incident. It is more than general AI news, so it is not Unrelated, and it is not merely Complementary Information, as it introduces new concerns and regulatory actions.

DeepSeek's handling of user personal data raises concerns; South Korean authorities to question the company

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) and concerns about its handling of personal data, which relates to potential violations of data protection laws and user privacy rights. However, the article describes ongoing investigations and inquiries rather than confirmed incidents of harm or violations. There is no indication that harm has already occurred, but there is a plausible risk of such harm if data protection is inadequate. Since the article focuses on regulatory inquiries and potential risks rather than actual realized harm, this qualifies as Complementary Information providing context and updates on governance responses to AI-related privacy concerns.

China's DeepSeek raises cybersecurity concerns; Ministry of Digital Affairs bans use in the public sector

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and its use in public sector devices. The concern is about potential cybersecurity risks and data leakage, which could harm national information security. Although no actual harm is reported as having occurred, the ban and warnings indicate a credible risk that the AI system's use could plausibly lead to significant harm. Therefore, this event fits the definition of an AI Hazard, as it involves the plausible future risk of harm due to the AI system's use, but no direct harm has yet been reported.

China's DeepSeek stirs concerns as major countries take countermeasures

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article describes the use and development of an AI system (DeepSeek's AI chatbot/model) that is suspected of involving unauthorized use of US AI models and raising significant data privacy and security concerns. Multiple countries are taking precautionary or regulatory measures to mitigate risks, indicating plausible future harms such as data breaches, privacy violations, and national security threats. Since no concrete harm has yet been reported but credible risks are recognized and acted upon, this situation fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The focus is on potential harms and governmental responses rather than on realized harm or incident remediation.

DeepSeek removed from Apple and Google stores in Italy; regulator to investigate

2025-01-30
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, and its data processing practices are under regulatory investigation for possible GDPR violations. The app's removal from stores is a precautionary measure amid concerns about data privacy and legal compliance. No actual harm to individuals or groups has been reported, but the situation presents a plausible risk of harm related to data privacy and rights violations if the app's practices are unlawful. Therefore, this event constitutes an AI Hazard, as it plausibly could lead to an AI Incident if violations are confirmed or harm occurs.

Citing cybersecurity, Ministry of Digital Affairs bans DeepSeek in government agencies; experts fear confidential data being sent back to servers

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (DeepSeek, a generative AI) and the associated cybersecurity risks from data transmission to external servers, which could lead to leakage of confidential information. Although no actual data breach or harm has been reported yet, the risk is credible and plausible given how the AI system operates and its use in sensitive government contexts. This event therefore constitutes an AI Hazard: it could plausibly lead to an AI Incident involving harm to property or communities, or violations of confidentiality and security obligations, if data leakage occurs. The event is not an AI Incident because no realized harm is reported, nor is it merely complementary information or unrelated news.

Researchers reveal blind spots in DeepSeek's privacy policy: personal data could be permanently appropriated

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI model) that collects user input data, including keystroke data, through its use. The article highlights that the privacy policy does not clearly specify data deletion or protection measures, raising concerns about potential unauthorized access and permanent misuse of personal data. This constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations under applicable law protecting fundamental rights. Since the harm is ongoing and realized through the system's use and data handling practices, this qualifies as an AI Incident rather than a mere hazard or complementary information.

China's DeepSeek stirs concerns: responses around the world at a glance

2025-01-31
setn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI chatbot/model) and discusses its development and use. The concerns raised by multiple countries relate to potential violations of data protection laws, intellectual property rights, and national security risks, which are harms covered under the AI Incident definition if realized. However, the article does not report any actual harm or incident having occurred yet, only credible concerns and precautionary measures. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to incidents involving harm to rights and security. The international governmental responses and restrictions further support the assessment of a credible potential risk rather than a realized incident.

Researchers reveal blind spots in DeepSeek's privacy policy: personal data could be permanently appropriated

2025-02-01
setn.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek R1) and highlights plausible risks related to privacy and data security due to the system's data collection and retention policies. No actual harm or breach has been reported yet, only potential future risks. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to harm (privacy violations and unauthorized data access), but no incident has occurred as described in the article.

China's DeepSeek stirs concerns as major countries take countermeasures

2025-01-31
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential risks and regulatory responses to DeepSeek's AI system, including data privacy concerns and national security implications. While these concerns are serious and involve AI system use, the article does not describe any actual harm or incident caused by the AI system. Instead, it reports on investigations, restrictions, and warnings by various governments to prevent possible harms. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving data breaches, privacy violations, or security risks, but no direct or indirect harm has yet been reported.

China's DeepSeek stirs concerns as major countries take countermeasures

2025-01-31
經濟日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI chatbot/model) and concerns about its development and use, including data privacy and security risks. However, it does not describe any specific realized harm (such as data breaches, injuries, or rights violations) directly caused by the AI system. Instead, it details various countries' precautionary and regulatory actions in response to perceived risks. This fits the definition of Complementary Information, which includes governance responses and updates on AI-related concerns without a new primary harm event. There is no direct or indirect harm reported yet, nor a near-miss or plausible immediate hazard event described. Hence, it is not an AI Incident or AI Hazard.

Millions of personal data records at high risk: Italy urgently blocks DeepSeek

2025-01-31
NTDChinese
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data, and the Italian authority's emergency restriction is due to concerns about potential high risk to millions of users' personal data. No actual harm is reported yet, but the plausible risk of harm to privacy and data protection rights is clear. The event is about regulatory intervention to prevent harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. The AI system's involvement is in its use and data processing, with potential for violation of rights if unregulated.

DeepSeek has been blocked in Italy

2025-01-31
MRW.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek chatbot, based on a generative AI model) whose data processing practices have raised regulatory concerns, leading to an urgent limitation order. Although no direct harm to individuals is reported, the event concerns potential violations of data protection rights and legal obligations, which protect fundamental rights. The authority's intervention and investigation respond to a possible or ongoing breach of data protection laws. Since the article reports a regulatory action and investigation rather than actual harm or confirmed violations, the event is best classified as Complementary Information, providing context on governance and regulatory responses to AI systems rather than describing a realized AI Incident or a plausible future hazard.

DeepSeek and privacy, insufficient response: the Garante confirms the block

2025-01-31
l'Adige
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system DeepSeek and its use as a chatbot AI. The Italian privacy authority's urgent limitation on data processing indicates a direct regulatory response to privacy harms (violation of data protection rights). The discovery of a security flaw exposing sensitive user data constitutes realized harm to personal data privacy. Furthermore, the AI's high rate of false and inaccurate responses, including unprompted political statements, suggests misinformation harm affecting communities. These harms are directly linked to the AI system's development and use. Therefore, this event qualifies as an AI Incident due to realized harms involving privacy violations and misinformation dissemination caused by the AI system.

DeepSeek AI blocked by the Italian authorities as other Member States open inquiries

2025-01-31
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose use has led to concerns about violations of data protection laws (GDPR), which are legal frameworks protecting fundamental rights related to privacy. The suspension and investigations indicate that the AI system's use has directly or indirectly led to a breach of obligations under applicable law intended to protect fundamental rights. Therefore, this qualifies as an AI Incident due to violations of rights caused by the AI system's use.

South Korea, Ireland Watchdogs To Question DeepSeek On User Data

2025-01-31
UrduPoint
Why's our monitor labelling this an incident or hazard?
The article describes regulatory authorities requesting information from an AI company regarding its data management practices. While this involves an AI system (DeepSeek's R1 chatbot) and concerns about personal data handling, there is no indication that any harm has occurred or that there is a direct or indirect link to injury, rights violations, or other harms. The event is about oversight and inquiry, not about an incident or a hazard. Therefore, it fits the category of Complementary Information, as it provides context on governance and societal responses to AI-related data privacy concerns.

Italian authorities block DeepSeek

2025-01-31
euronews
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) whose development and use involve processing personal data. The blocking and investigations stem from concerns about violations of data protection laws (GDPR), which are legal frameworks protecting fundamental rights related to privacy. The event involves the use of the AI system leading to potential or actual violations of rights under applicable law. Since the Italian authority has already blocked the system due to these concerns, and investigations are underway, this constitutes an AI Incident involving violations of rights (privacy rights).

Sanctions against DeepSeek: Italy blocks the Chinese app and the U.S. Congress bans it for its staff

2025-01-31
EL PAIS
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use has raised significant concerns about data privacy, legal compliance, and cybersecurity risks. The Italian authority's blocking and investigation, along with the U.S. Congress's prohibition, indicate recognition of plausible risks of harm, including unauthorized data processing and malware distribution. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving data privacy violations or security breaches, but no incident has materialized at this time.

The Privacy Garante blocks DeepSeek in Italy: protecting user data

2025-01-31
HTML.it
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot that processes personal data, and the Italian authority's decision to block it is based on concerns about potential misuse of sensitive user information, which could lead to violations of privacy rights. Since no actual harm is reported but the risk is credible and immediate, this fits the definition of an AI Hazard. The event is not a Complementary Information piece because it is not an update or response to a past incident but a preventive regulatory action. It is not an AI Incident because no realized harm has occurred yet.

DeepSeek's handling of users' personal data raises concerns; South Korean authorities to question the company

2025-01-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) and concerns about its data handling practices, which could potentially lead to violations of personal data protection laws and users' privacy rights. However, the article only describes ongoing or planned regulatory inquiries and investigations without reporting any realized harm or confirmed violations. Therefore, this situation represents a plausible risk of harm (AI Hazard) rather than an actual incident. The involvement of AI is explicit, and the potential harm relates to violations of rights under applicable law, fitting the definition of an AI Hazard.

DeepSeek blocked in Italy, investigation of the AI app launched

2025-01-31
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is being restricted by a data protection authority due to concerns about compliance with legal frameworks protecting user data. While no direct harm is reported, the investigation and restriction indicate a plausible risk of violation of rights if the AI system's data processing continues unchecked. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to a breach of applicable law and fundamental rights, but no actual harm has been confirmed yet.

Italy's data protection authority blocks DeepSeek

2025-01-31
Die Presse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has raised concerns about data leaks and censorship, implicating potential violations of user privacy and rights. The Italian authority's blocking of the app and investigation indicate a credible risk of harm to users' data privacy and rights, but no direct or realized harm is reported yet. The focus is on preventing harm and ensuring compliance with data protection laws. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has occurred yet.

The Privacy Garante blocks the DeepSeek chatbot service in Italy

2025-01-31
Corriere Nazionale
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use in Italy is blocked by the data protection authority due to concerns about personal data processing and compliance with legal frameworks. While no direct harm is reported, the regulatory action indicates a plausible risk of violation of users' rights and data privacy. Therefore, this situation constitutes an AI Hazard, as the AI system's use could plausibly lead to violations of fundamental rights if not properly regulated or compliant.

Garante: "DeepSeek is a global threat; here is why we blocked it"

2025-01-31
Agenda Digitale
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and its use has led to significant concerns about violations of data protection rights (a breach of obligations under applicable law, GDPR). The Garante's decision to block the processing of user data is a response to these realized or imminent harms. The event involves the use of an AI system and its data processing practices that have directly led to a violation of fundamental rights (privacy and data protection). Therefore, this qualifies as an AI Incident. The article does not merely discuss potential future harm or general AI governance but reports a concrete regulatory action due to actual or imminent harm from the AI system's operation.

AI: South Korean regulator demands explanations from DeepSeek over personal data

2025-01-31
Le Figaro
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory action concerning the use of personal data by DeepSeek, an AI company. While no direct harm has been reported yet, the investigation indicates potential risks related to privacy violations or misuse of personal data by the AI system. This situation represents a plausible risk of harm stemming from the AI system's use of personal data, fitting the definition of an AI Hazard rather than an Incident, as no realized harm is described.

The Privacy Garante blocks DeepSeek in Italy

2025-01-31
Benzinga Italia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing activities have been restricted by a regulatory authority due to concerns over user data protection. The limitation and investigation imply potential or ongoing violations of privacy rights, which are fundamental rights protected by law. Since the AI system's use has led to regulatory intervention to prevent or address harm to users' data privacy, this qualifies as an AI Incident under the category of violations of human rights or breach of legal obligations protecting fundamental rights.

DeepSeek's sudden rise stirs an information security storm: the US bans Navy use and Germany issues a stern warning

2025-02-01
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI chatbot/model) and details multiple countries' concerns and regulatory actions due to potential security, privacy, and intellectual property risks. Although no direct harm is reported as having occurred, the extensive governmental responses and restrictions reflect a credible and plausible risk of harm, including data privacy violations and national security threats. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident. The article does not focus on a realized harm event but on the potential risks and responses, so it is not an AI Incident or Complementary Information. It is not unrelated because the AI system and its risks are central to the report.

DeepSeek raises information security concerns; Dutch privacy regulator launches an investigation

2025-02-01
聯合新聞網
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI startup whose data collection practices are under scrutiny by privacy regulators due to concerns about personal data handling. The involvement of an AI system is explicit, as DeepSeek is described as an AI company. The investigation and warnings indicate potential misuse or non-compliance with data protection laws, which could plausibly lead to violations of fundamental rights (privacy rights). However, the article does not report any realized harm yet, only regulatory concern and investigation. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving violations of rights if the issues are confirmed or not remediated.

DPP legislator: Taiwan is on the front line of the threat and must respond to DeepSeek with caution

2025-02-01
聯合新聞網
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly mentioned, and its use is restricted due to credible cybersecurity risks that could lead to harm to national security and critical infrastructure. Although no actual harm is reported, the event clearly indicates a plausible risk of significant harm if the AI system were used, justifying a classification as an AI Hazard. The article focuses on the potential threat and preventive policy response rather than an incident of realized harm, so it is not an AI Incident. It is not merely complementary information because the main focus is on the risk and prohibition due to AI-related security concerns, not on updates or responses to past incidents.

DeepSeek raises information security concerns; Dutch privacy regulator launches an investigation

2025-02-01
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's software) whose data collection and processing practices have raised serious privacy concerns, prompting regulatory scrutiny. Although no direct harm has been reported yet, the investigation and regulatory warnings indicate a plausible risk of violations of privacy rights and data protection laws, which are part of human rights and legal obligations. Since the harm is potential and the event centers on the risk of such harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a regulatory response to a credible risk related to an AI system's use.

DeepSeek raises information security concerns; DPP legislator: Taiwan is on the front line and must respond with caution

2025-02-01
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI model) and discusses its use in government and critical infrastructure contexts. The ban is due to credible cybersecurity risks, including potential data leakage and espionage, which could harm national security and critical infrastructure. No actual harm is reported, but the risk is considered significant and plausible. Hence, this is an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and the preventive measures taken, not on a realized incident or harm. It is not merely complementary information because the main focus is on the risk and prohibition due to AI-related security concerns.

Italy blocks DeepSeek after receiving no information about the data the Chinese AI collects

2025-02-01
Alerta Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose data collection and processing practices are under scrutiny by a data protection authority. The authority's urgent blocking and investigation reflect concerns about potential violations of data protection rights, which fall under violations of applicable law protecting fundamental rights. Although no explicit harm has been reported as having occurred, the regulatory action is based on the plausible risk that the AI system's data practices could lead to harm (e.g., privacy violations). Thus, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving rights violations if unaddressed. It is not an AI Incident yet because no realized harm is described, nor is it merely complementary information or unrelated news.

DeepSeek responds to the block ordered by the privacy Garante with a raspberry - the company...

2025-02-01
dagospia.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI chatbot) and its use, specifically regarding data collection and privacy compliance. The refusal to block the service despite regulatory orders and concerns about data security represent a plausible risk of harm to individuals' privacy rights, which is a violation of fundamental rights under applicable law. Since no actual harm or incident is reported yet, but there is a credible risk of privacy violations and data misuse, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to a past incident, so it is not Complementary Information. It is clearly related to AI and potential harm, so it is not Unrelated.

The Privacy Garante blocks DeepSeek to protect Italians' data. Mollicone: "Well done"

2025-01-31
Secolo d'Italia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek chatbot) whose use has led to concerns about violations of data protection laws and censorship, which can be considered a violation of rights and a harm to communities. The limitation imposed by the privacy authority is a direct response to these harms. Since the AI system's use has already caused these issues and regulatory action is underway, this qualifies as an AI Incident rather than a mere hazard or complementary information.

DeepSeek defies the Garante and is still available in Italy. Scorza: "Their response clashes with reality"

2025-01-31
Eurofocus | Adnkronos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) whose use and data processing practices have led to regulatory intervention due to concerns about violations of personal data protection rights under GDPR. The AI system's availability in Italy and its data collection practices without clear legal basis constitute a breach of obligations intended to protect fundamental rights, specifically privacy rights. The refusal to cooperate and possible false statements to the authority further exacerbate the issue. These factors meet the criteria for an AI Incident as the AI system's use has directly or indirectly led to violations of human rights and legal obligations.

Doubts over its data: the privacy Garante blocks DeepSeek

2025-01-31
ictbusiness.it
Why's our monitor labelling this an incident or hazard?
An AI system (a DeepSeek chatbot based on a large language model) is explicitly involved. The event stems from the use and development of the AI system, specifically regarding data privacy and legal compliance. Although no direct harm such as injury or property damage is reported, the event involves a violation of legal obligations protecting fundamental rights (data privacy under the GDPR), which qualifies as harm under the framework. The GPDP's blocking and investigation indicate that the AI system's operation has led to a breach, or a risk of breach, of data protection rights. Therefore, this is an AI Incident involving violations of rights due to the AI system's use and insufficient compliance with legal frameworks.

Italy takes a step toward reining in DeepSeek, the new Chinese artificial intelligence

2025-01-31
La Opinión de Zamora
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use and data processing practices are under scrutiny by a regulatory authority due to concerns about privacy and potential surveillance/propaganda. While no direct harm is reported, the investigation and data use restrictions indicate a credible risk that the AI system could lead to violations of privacy and fundamental rights. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if the risks materialize. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on the potential risks and regulatory response to the AI system.

The Pentagon blocks access to DeepSeek after use by its employees

2025-01-31
Boursier.com
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) was used by Pentagon employees, and concerns about data security and privacy risks led to blocking access. The article does not report any realized harm such as data breaches or security incidents but highlights plausible future harm from the use of this AI system, especially given the sensitive context and data storage in China. The event is about mitigating potential risks rather than responding to an incident that has already caused harm. Hence, it fits the definition of an AI Hazard.

Italy's data protection regulators block DeepSeek

2025-01-31
inside-it.ch
Why's our monitor labelling this an incident or hazard?
The event involves an AI-related company (Deepseek) and concerns the use of user data, which is linked to privacy rights under EU law. However, the article does not describe any realized harm or incident caused by the AI system, nor does it indicate a direct or indirect harm resulting from the AI system's use or malfunction. Instead, it reports on regulatory scrutiny and enforcement actions, which are governance responses to potential or alleged issues. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI use and data protection enforcement, without describing a new AI Incident or AI Hazard.

Italy blocks the Chinese application DeepSeek

2025-01-31
Publico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI application) whose development and use are under scrutiny due to unclear data practices. The blocking order and investigations indicate concerns about potential violations of data protection rights and security risks. Since no actual harm has been reported yet, but there is a credible risk of harm related to privacy and legal compliance, this qualifies as an AI Hazard rather than an AI Incident. The event focuses on regulatory actions and potential risks rather than realized harm.

DeepSeek: Taiwan bans the app in government agencies, citing information security risks

2025-02-01
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek) is explicitly involved, and the event concerns its use in government agencies. The ban is due to plausible risks of information security breaches and data leakage, which could lead to harm to critical infrastructure and national security. Although no direct harm is reported yet, the credible risk of harm from the AI system's use in sensitive contexts qualifies this as an AI Hazard. The event does not describe an actual incident of harm but a preventive measure against potential harm.

China's DeepSeek stirs concerns as countries around the world take countermeasures

2025-02-01
看中國
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI models and chatbot) and details multiple harms: alleged theft of US AI models (intellectual property violation), data privacy risks leading to regulatory restrictions and investigations in several countries, and national security concerns prompting usage bans and export controls. These harms have materialized or are actively being addressed, indicating direct or indirect harm caused by the AI system's development and use. The involvement of AI is clear, and the harms fall under violations of intellectual property rights and data protection laws, as well as potential broader harms to communities and national security. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Italy takes down DeepSeek; the Chinese AI app is banned

2025-01-29
Poder360
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and concerns about its data handling practices, which could plausibly lead to violations of user privacy rights (a form of harm under the framework). Since no realized harm or incident is reported, but there is a credible risk prompting regulatory action, this fits the definition of an AI Hazard. The event is not merely general AI news or a complementary update but a regulatory intervention due to potential harm risks.

AI and privacy: the Garante investigates DeepSeek over possible risks to personal data

2025-01-29
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek chatbot) that processes large amounts of user data. The investigation is about potential risks to privacy and data protection, which could plausibly lead to violations of fundamental rights under applicable law (GDPR). Since no realized harm is reported yet, but there is a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the potential risk and regulatory scrutiny rather than updates or responses to a past incident.

DeepSeek, zero privacy: a clash of rights in Italy

2025-01-30
Agenda Digitale
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's large language model chatbot) whose deployment has led to concerns about violations of data protection laws (GDPR), which protect fundamental rights. The Italian data protection authority's intervention and the company's self-suspension from app stores indicate that the AI system's use has directly or indirectly led to potential or actual harm to users' rights. Given the parallels with the prior OpenAI case that resulted in sanctions, and the current regulatory scrutiny, this qualifies as an AI Incident involving violations of rights. The article does not merely discuss potential future harm but describes ongoing regulatory actions due to realized or imminent harms related to personal data processing by the AI system.

The privacy Garante blocks DeepSeek in Italy: "A decision to protect users' data"

2025-01-30
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have been limited by a regulatory authority to protect user privacy. Although no direct harm is reported, the restriction is a response to potential privacy violations, which relate to violations of rights under applicable law. Since the event concerns a regulatory intervention to prevent or limit harm rather than an actual realized harm, it qualifies as Complementary Information about governance and societal response to AI-related privacy concerns rather than an AI Incident or AI Hazard.

'DeepSeek' transit gloria mundi - the privacy Garante blocks the Chinese application of ...

2025-01-30
dagospia.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose data processing practices have raised privacy concerns leading to regulatory intervention. Although no direct harm is reported, the restriction aims to prevent potential violations of data protection rights, which are fundamental rights. Since the event describes a regulatory action to prevent harm rather than an actual harm occurring, it fits the category of Complementary Information, providing context on governance responses to AI-related privacy risks.

Italy blocks the Chinese app 'DeepSeek' over a lack of information on how its AI is trained

2025-01-30
El Español
Why's our monitor labelling this an incident or hazard?
An AI system ('DeepSeek') is explicitly mentioned. The blocking and investigation stem from concerns about the AI system's development and use, specifically regarding data usage and transparency. Although no direct harm is reported yet, the lack of transparency and potential misuse of user data could plausibly lead to violations of data protection rights, which are a form of human rights violation. Since the harm is not yet realized but there is a credible risk, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses to a past incident or broader governance but on the current blocking and investigation, indicating a potential future harm scenario.

The Privacy Garante blocks DeepSeek: protecting Italian data

2025-01-30
tgcom24.mediaset.it
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved, and the event concerns its use and data processing practices. The intervention by the privacy authority indicates a regulatory response to potential or ongoing violations of data protection laws, which protect fundamental rights. Although the article does not explicitly state that harm has already occurred, the urgent limitation suggests a credible risk of violation of users' privacy rights. Therefore, this event is best classified as an AI Hazard: the AI system's use could plausibly lead to a breach of legal obligations protecting fundamental rights, but no confirmed harm has been reported yet.

Italy blocks DeepSeek "urgently and with immediate effect"

2025-01-30
Vozpópuli
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is being restricted by authorities due to concerns about data privacy and compliance with legal frameworks. Although no direct harm (such as injury, rights violations, or other damages) has been reported, the investigation and urgent blocking indicate a credible risk that the AI system's operation could lead to violations of data protection rights or other harms. The AI system's development and use are under scrutiny, and the authorities' actions aim to prevent potential harm. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek turning point: the Privacy Garante blocks the app "to protect Italians' data"

2025-01-30
ilGiornale.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek chatbot) whose use has triggered regulatory action due to insufficient data protection safeguards, implicating violations of fundamental rights under privacy law. The blocking of the app and the opening of an investigation by the privacy authority indicate that the AI system's use has directly or indirectly led to a breach, or risk of breach, of legal obligations protecting fundamental rights. This fits the definition of an AI Incident, as the AI system's use has caused or is causing harm in the form of rights violations. The event is not merely a potential risk (hazard) nor a complementary update; it is a concrete regulatory response to an ongoing issue.

Italy's data protection authority blocks DeepSeek

2025-01-30
news.ORF.at
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's AI app) and concerns about data privacy violations, which constitute a breach of legal obligations protecting fundamental rights. The blocking of the app and the investigation by the authority indicate that the AI system's use has led to or is strongly suspected of leading to violations of applicable law. Therefore, this qualifies as an AI Incident due to the realized or strongly suspected breach of data protection rights linked to the AI system's use.

DeepSeek, the Garante requests information: "High risk for the data of millions of people in Italy"

2025-01-28
Il Mattino
Why's our monitor labelling this an incident or hazard?
The article describes regulatory scrutiny over an AI system's data practices due to potential high risks to personal data, but does not report any realized harm or incident. The event concerns possible future risks and compliance verification, fitting the definition of Complementary Information as it provides context and updates on governance and oversight related to AI systems, rather than reporting an AI Incident or Hazard.

DeepSeek and privacy rules: does the Chinese platform comply? What can be done to protect yourself

2025-01-28
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI chatbot platform) and concerns its use and data processing practices. The Italian privacy authority's investigation and request for information indicate a potential risk of harm to personal data privacy, which is a violation of fundamental rights under applicable law. However, the article does not report any confirmed or realized harm yet, only a potential risk and regulatory scrutiny. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to violations of privacy rights and related harms, but no incident has been confirmed or documented at this stage.

The privacy Garante takes action on DeepSeek: "The data of millions of Italians at risk"

2025-01-28
RaiNews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about personal data handling and privacy risks, which could plausibly lead to harm if mismanaged. However, no actual harm or incident has been reported so far; the authority is requesting information to assess potential risks. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of harm to personal data privacy but no confirmed incident yet.

The privacy Garante warns Italians about DeepSeek

2025-01-29
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI chatbot) and concerns about its data handling practices potentially violating privacy laws, which relates to human rights and legal obligations. However, the article does not report any realized harm or confirmed violation, only a regulatory inquiry and requests for information. Therefore, this is a plausible risk scenario where harm could occur if issues are confirmed, but no incident has yet materialized. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

The privacy Garante on DeepSeek: "Possible risk for the data of millions of Italians"

2025-01-29
Benzinga Italia
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory authority's request for information due to potential high risk to personal data from an AI system's data collection and training practices. There is no indication that harm has already occurred, but the situation plausibly could lead to violations of data protection rights, which are a form of human rights violation. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving data privacy harm, but no incident has been confirmed or reported yet.

The app of DeepSeek, the low-cost Chinese artificial intelligence, has disappeared from the stores of

2025-01-29
dagospia.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek generative AI app) and concerns about its use potentially causing harm to personal data privacy of millions of people, which is a violation of fundamental rights under applicable law. However, the article does not report any realized harm or incident; rather, it reports regulatory scrutiny and app removal as a precautionary measure. Therefore, this is a plausible risk scenario where the AI system's use could lead to harm but no harm has yet been confirmed. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Wang Ting-yu: Taiwan is on the front line of threats and should respond to DeepSeek all the more prudently

2025-02-01
setn.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI model developed by a Chinese company, and its use is restricted by Taiwan's government due to cybersecurity and national security concerns. The article highlights that the AI system could potentially transmit sensitive data or contain malicious code, posing risks to critical infrastructure and national security. No actual harm has been reported yet, but the credible risk of harm has led to a government ban. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if not controlled. The event is not an AI Incident because no realized harm is described, nor is it merely complementary information or unrelated news.

Dutch regulator says it will investigate DeepSeek's data collection practices

2025-02-01
RFI
Why's our monitor labelling this an incident or hazard?
The article involves an AI company (DeepSeek) whose data collection practices are under investigation by multiple European data protection authorities due to privacy concerns. The investigation relates to the development and use of AI systems that process personal data. However, there is no indication that any actual harm (such as data breaches, rights violations, or other damages) has yet occurred. The event is about potential risks and regulatory scrutiny, which could plausibly lead to harm if issues are confirmed, but no realized harm is reported. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek under scrutiny in multiple countries; Taiwan bans use by government agencies

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use is being restricted or banned by various governments due to concerns about data privacy, information security, and national security risks. These concerns indicate plausible risks of harm, such as data breaches or violations of privacy rights, but the article does not describe any actual harm or incidents caused by the AI system. The focus is on precautionary measures, regulatory scrutiny, and policy responses to potential risks. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet been reported.

Dutch data protection authority to launch investigation into DeepSeek

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is explicitly described as a generative AI system whose data collection and processing practices are under investigation for potential violations of privacy laws and risks related to Chinese government access to data. The event involves the use and development of an AI system with concerns about compliance and data protection, which could plausibly lead to violations of fundamental rights (privacy). No actual harm or breach has been confirmed yet, but the regulatory warnings and actions indicate a credible risk. Hence, this is an AI Hazard rather than an AI Incident or Complementary Information.

Security concerns raised! Dutch Data Protection Authority to launch investigation into DeepSeek

2025-02-01
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI company with AI models whose data collection practices are under scrutiny for potential violations of privacy and data protection laws (GDPR). The Dutch authority's investigation and warnings to users highlight concerns about the AI system's use and data handling that could plausibly lead to harm in terms of violations of rights and legal obligations. Since no actual harm or incident is reported yet, but there is a credible risk and regulatory action underway, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek's handling of users' personal data raises concerns; South Korean authorities question the company

2025-02-01
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article describes official inquiries by data protection authorities into DeepSeek's AI chatbot's data handling practices. While no actual harm or violation has been confirmed or reported, the situation presents a credible risk that the AI system's use or development could lead to violations of personal data protection laws, which fall under violations of rights. Therefore, this qualifies as an AI Hazard, as the AI system's involvement could plausibly lead to an AI Incident if data privacy is compromised. There is no indication of realized harm yet, so it is not an AI Incident. It is more than just complementary information because the inquiries themselves indicate a potential risk rather than a mere update or response to a past incident.

DeepSeek raises security concerns; US State Department says use of risky tools will be restricted

2025-02-01
中央社 CNA
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of an AI system (DeepSeek's AI model) that raises significant data privacy and security concerns. The restrictions and warnings by government agencies indicate a recognition of plausible risks that the AI system could lead to harm, specifically violations of privacy and potential breaches of data security. Although no direct harm is reported as having occurred yet, the credible risk of harm to data privacy and security justifies classification as an AI Hazard. The article focuses on the potential risks and preventive measures rather than reporting an actual incident of harm, so it does not qualify as an AI Incident. It is more than general AI news, as it concerns specific risks and governmental responses, so it is not Complementary Information or Unrelated.

DPP legislator: Taiwan is on the front line of threats and must respond prudently to DeepSeek

2025-02-01
中央社 CNA
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use is restricted by Taiwan's government due to credible cybersecurity risks. The article does not report any realized harm but focuses on the potential for harm, such as data leakage or espionage, which could impact national security and critical infrastructure. This fits the definition of an AI Hazard, as the development and use of DeepSeek could plausibly lead to an AI Incident involving harm to critical infrastructure or national security. The article's main focus is on the potential threat and the preventive measures taken, not on an actual incident or realized harm, so it is not an AI Incident. It is also not merely complementary information or unrelated, as it directly concerns a credible AI-related risk and governmental response.

DeepSeek triggers security concerns; Dutch privacy regulator launches investigation

2025-02-01
中央社 CNA
Why's our monitor labelling this an incident or hazard?
DeepSeek is described as an AI company, and the concerns revolve around its collection and use of personal data, which implicates privacy rights and data protection laws. The investigation by the Dutch authority and coordinated actions by other EU regulators indicate potential violations of legal obligations protecting fundamental rights. However, the article does not report any realized harm or incident caused by DeepSeek's AI system, only that an investigation and regulatory scrutiny are underway due to concerns. Therefore, this event is best classified as Complementary Information, as it provides updates on governance responses and regulatory actions related to AI privacy concerns, without describing a specific AI Incident or AI Hazard.

Over security concerns, Ministry of Digital Affairs bans DeepSeek in government agencies; experts fear confidential data being sent back to servers

2025-02-01
台視新聞網
Why's our monitor labelling this an incident or hazard?
DeepSeek is a generative AI system that processes input data by sending it to its servers and returning results. The article explicitly states that using DeepSeek in public agencies risks confidential data being transmitted and potentially leaked, which constitutes harm to property and possibly national security (harm to communities). The AI system's use directly creates a risk of harm, and the authorities' restriction is a response to that risk. Because the article describes restrictions imposed in response to actual data transmission practices, this qualifies as an AI Incident rather than a mere hazard or complementary information.

DeepSeek AI sparks a security storm! Several EU countries investigate; Taiwan's Ministry of Digital Affairs issues a warning

2025-02-01
Newtalk新聞
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI company whose products involve data collection and processing, implicating AI system involvement. The investigations and warnings by multiple regulatory bodies highlight concerns about the AI system's data handling potentially leading to violations of privacy rights and data protection laws, which constitute harm under the framework. Since the article does not report actual realized harm but focuses on ongoing investigations and precautionary warnings, this situation represents a plausible risk of harm rather than a confirmed incident. Therefore, it qualifies as an AI Hazard due to the credible potential for privacy and data security harms stemming from the AI system's use.

DeepSeek's data privacy questioned; South Korea steps in to investigate

2025-01-31
ec.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model-based chatbot). The investigations by multiple national data protection authorities focus on how DeepSeek manages personal data and the transparency of its data sourcing and processing. These concerns relate to potential violations of data protection laws and user privacy rights, which fall under violations of human rights or legal obligations protecting fundamental rights. Although no explicit harm has been reported yet, the ongoing investigations imply that misuse or non-compliance could have led or could lead to harm. Since the article mainly describes regulatory inquiries and concerns about possible data privacy violations without confirmed harm, this event is best classified as Complementary Information, providing updates on governance responses and ongoing scrutiny of an AI system's compliance with data protection laws.

DeepSeek banned from the Italian App Store and Google Play; the company must answer for the uses and purposes of its data collection

2025-01-31
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI model-based app) whose use involves personal data collection. The ban and investigation by the Italian regulator is a governance response to potential violations of data protection laws (GDPR). There is no indication that actual harm has occurred yet, but the regulator is seeking clarifications and imposing restrictions to prevent possible harm, especially to minors. Therefore, this event is best classified as Complementary Information, as it provides important context on societal and governance responses to AI-related privacy concerns, rather than describing a realized AI Incident or a plausible AI Hazard.

Personal data feared to be "sent to China"! Italy, the US military, and the US Congress block DeepSeek

2025-01-31
ec.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek AI large language model) whose use has led to significant concerns about personal data being sent to China without sufficient transparency or legal safeguards. The involvement of official bodies banning or restricting the AI system's use due to these concerns indicates that harm related to violations of data privacy and legal obligations has occurred or is ongoing. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to a breach of fundamental rights (data privacy). The event is not merely a potential risk but involves realized harm and regulatory responses, distinguishing it from an AI Hazard or Complementary Information.

Italy issues an emergency blocking order restricting DeepSeek's handling of citizens' personal data

2025-01-31
中央社 CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) that processes personal data and human conversations, indicating AI involvement. The Italian authority's emergency restriction is due to concerns about data privacy risks and insufficient compliance with legal frameworks, aiming to prevent potential violations of fundamental rights. Since no actual harm or incident is reported yet, but a credible risk exists, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a regulatory action directly linked to potential harm from AI use.

Italian data regulator restricts DeepSeek and opens an investigation

2025-01-31
美國之音
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek's chatbot) and concerns the handling of personal data, which relates to violations of fundamental rights under applicable law (data protection and privacy rights). Although no direct harm is explicitly reported yet, the regulatory action and investigation indicate concerns about potential or ongoing violations. Since the article describes an active regulatory restriction and investigation due to inadequate compliance, this constitutes an AI Incident involving violations of rights.

DeepSeek accused of sending data to China; Italy, fearing personal data leaks, demands an explanation

2025-01-30
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) whose data handling practices are being investigated by the Italian data protection authority for potential GDPR violations and risks of personal data exposure. Although the AI system's use is implicated, the event centers on regulatory inquiry and potential future harm rather than an actual incident of harm occurring. Therefore, it fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to violations of privacy rights and data protection laws, but no confirmed harm has yet materialized.

Italy's Apple and Google stores remove DeepSeek; regulator to investigate

2025-01-30
中央社 CNA
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, and the regulatory authorities are investigating its data usage practices for potential violations of GDPR, which protects fundamental rights. The removal of the app from stores suggests a serious concern about possible harm to users' privacy rights. While no explicit harm has been confirmed, the situation presents a credible risk of violation of rights due to AI system use, fitting the definition of an AI Hazard rather than an AI Incident, as harm is plausible but not yet realized.

Global alarm! DeepSeek blocked in Italy; Ireland demands data; Germany too.....

2025-01-30
Newtalk新聞
Why's our monitor labelling this an incident or hazard?
The article describes regulatory actions and investigations concerning DeepSeek, an AI model, focusing on potential violations of data protection laws (GDPR), risks of bias dissemination, and election interference. These concerns represent plausible future harms that could arise from the AI system's use or misuse. Since no actual harm has been reported yet but credible risks are identified and regulatory measures are underway, this event fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI is explicit, and the potential harms relate to privacy rights and democratic processes, which are significant societal harms.

DeepSeek accused of sending data to China; Italy, fearing personal data leaks, demands an explanation

2025-01-30
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI chatbot) whose development and use include collecting and processing personal data. The regulatory investigation by the Italian data protection authority is due to concerns about potential violations of GDPR, including unauthorized cross-border data transfers and unclear data handling practices. Although no realized harm (such as a data breach or privacy violation) is reported, the situation plausibly could lead to harm to individuals' privacy rights if the issues are not resolved. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving violations of data protection laws and privacy harm. The article mainly focuses on the potential risk and regulatory response rather than an actual incident of harm.

DeepSeek embroiled in privacy controversy; Italy temporarily bans its listing - 20250201

2025-01-31
明報新聞網 - 每日明報 daily news
Why's our monitor labelling this an incident or hazard?
DeepSeek's AI model is explicitly identified as the AI system involved. The investigations and bans by data protection authorities in Italy, Germany, Ireland, South Korea, France, and Taiwan are responses to the AI system's use and its poor privacy protection measures. These regulatory actions indicate that the AI system's deployment has already caused or is causing violations of data privacy rights, which are a form of human rights violations under applicable law. The temporary ban on the AI model's availability in Italy and the ongoing investigations demonstrate that harm related to privacy and data protection has materialized or is occurring. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Italy removes DeepSeek from app stores

2025-01-31
香港經濟日報HKET
Why's our monitor labelling this an incident or hazard?
The event describes governmental and regulatory actions taken against an AI system (DeepSeek) due to concerns about personal data usage and privacy. While these concerns relate to fundamental rights (privacy), the article does not report any realized harm or violation but rather precautionary and investigative measures. The removal of the app from stores and calls for caution are governance responses to potential risks. Therefore, this event is best classified as Complementary Information, as it provides updates on societal and governance responses to AI-related privacy concerns without reporting an actual AI Incident or AI Hazard.

Concerns follow DeepSeek's sudden popularity: an overview of major countries' responses

2025-01-31
中央社 CNA
Why's our monitor labelling this an incident or hazard?
The article describes the use and development of AI systems by DeepSeek and the resulting international governmental concerns about data privacy, security, and potential misuse. However, it does not document any actual incidents of harm (such as data breaches, health or safety injuries, or rights violations) that have occurred due to DeepSeek's AI systems. Instead, it focuses on precautionary measures, investigations, and warnings by various countries, indicating a credible risk of future harm. Therefore, the event fits the definition of an AI Hazard, as the AI system's use and development could plausibly lead to incidents involving data privacy violations or security breaches, but no direct harm has yet been reported.

Italy issues an emergency blocking order restricting DeepSeek's handling of citizens' personal data

2025-01-31
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data, and the Italian authority's emergency restriction is a response to potential violations of fundamental rights (data privacy) due to insufficient compliance with legal frameworks. The event involves the use of an AI system and the plausible risk of harm to individuals' rights, justifying classification as an AI Hazard. There is no report of actual harm occurring yet, so it is not an AI Incident. The event is not merely complementary information since it reports a regulatory action directly linked to potential harm from AI use.

DeepSeek raises security concerns; Ministry of Digital Affairs: government agencies must not use it

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model) whose use is considered to pose a credible risk of harm to national cybersecurity and information security. Although no actual harm is reported as having occurred, the official government warning and restrictions indicate a plausible risk that the AI system's use could lead to an AI Incident involving harm to critical infrastructure or national security. Therefore, this qualifies as an AI Hazard rather than an Incident, as the harm is potential and preventive measures are being taken.

DeepSeek feared to pose security risks; Ministry of Digital Affairs orders a public-sector ban

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI product with potential cybersecurity risks that could lead to harm by leaking sensitive government and personal data. Although no actual harm is reported yet, the directive to ban its use in public agencies is a preventive measure to avoid plausible future harm to national information security. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an incident involving harm to critical infrastructure or national security if not controlled.

South Korea asks DeepSeek to explain how it handles personal data; Taiwan bans government agency use

2025-02-01
Yahoo News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system as it is described as an AI model/service. The event involves concerns about the use of this AI system and its handling of personal data, which implicates potential violations of data protection laws and privacy rights. However, the article does not report any realized harm or incident caused by DeepSeek's AI system, only regulatory scrutiny and preventive measures (e.g., bans and investigations). Therefore, this is a plausible risk scenario where the AI system's use could lead to harm (e.g., data breaches, privacy violations) but no direct harm has yet been reported. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Guarding against national security risks, Ministry of Digital Affairs: government agencies must not use DeepSeek AI services

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted by government authorities to prevent potential cybersecurity risks and data leaks that could harm national information security. Since no actual harm or incident has occurred but there is a credible risk of future harm, this qualifies as an AI Hazard. The article does not report any realized harm or incident, nor does it primarily discuss responses to past incidents, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and its potential risks.

To prevent security risks, Ministry of Digital Affairs: government agencies must not use DeepSeek

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns its use by government agencies. However, no actual harm has been reported; rather, the government is imposing restrictions to prevent plausible future harm related to cybersecurity risks. Therefore, this constitutes an AI Hazard, as the use or malfunction of the AI system could plausibly lead to harm (national information security breaches) if not controlled.

Amid feared security concerns, Ministry of Digital Affairs bans DeepSeek in the public sector

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted by government authorities due to credible cybersecurity risks, including potential data leakage and harm to national information security. Although no direct harm has been reported yet, the warnings and restrictions are based on plausible risks that the AI system's use could lead to significant harm to critical infrastructure and national security. Therefore, this event qualifies as an AI Hazard because it concerns the plausible future harm from the use of an AI system, rather than an incident where harm has already occurred. The article focuses on the risk and preventive measures rather than a realized harm or incident.

Taiwan's Ministry of Digital Affairs bans government agencies from using DeepSeek

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI product) whose use is restricted by a government agency due to cybersecurity risks and potential data leakage, which could plausibly lead to harm to national information security and critical infrastructure. Since no actual harm has been reported but credible risks are identified and actions taken to mitigate them, this qualifies as an AI Hazard rather than an AI Incident. The focus is on preventing plausible future harm rather than reporting realized harm.

DeepSeek feared to endanger national information and communication security; Ministry of Digital Affairs announces government agencies must not use it

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies DeepSeek as an AI language model product and highlights concerns about its use leading to potential information security breaches and harm to national information and communication security. The government's restriction and warnings indicate a credible risk that the AI system's use could plausibly lead to harm, fitting the definition of an AI Hazard. Since no actual harm or incident is reported, and the focus is on preventing potential harm, this event is best classified as an AI Hazard.

DeepSeek raises security concerns; Ministry of Digital Affairs: government agencies must not use it

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek AI model) and concerns about its use leading to potential cybersecurity risks, including data leakage and cross-border transmission of sensitive information. The advisory to restrict use in public agencies and critical infrastructure is a preventive measure to avoid harm to national information security, which qualifies as harm to critical infrastructure management and operation. Since no actual harm has been reported but plausible future harm is credible and the advisory is issued to prevent such harm, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risk and restriction due to the AI system's characteristics, not on responses to past incidents or general AI ecosystem updates.

National security considerations! DeepSeek raises security concerns; Ministry of Digital Affairs bans government agency use

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI model from DeepSeek and the associated cybersecurity risks, including potential data leakage and cross-border transmission, which could harm national information security. The government's action to restrict usage is a response to these plausible risks. Since no actual harm has been reported but there is a credible risk of harm, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk and preventive measures, not on updates or responses to past incidents.

After the Pentagon's ban, hundreds of US government-linked companies also ban DeepSeek

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek's generative AI technology, whose use has led to significant concerns about data privacy and security, with actual bans implemented by government and related enterprises. The AI system's data handling practices expose users to potential unauthorized data access by the Chinese government, constituting a violation of rights. Additionally, cybersecurity experts warn that the AI system's vulnerabilities are already being exploited or could be exploited to facilitate cyberattacks and fraud, indicating realized harm. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to harms including privacy violations and increased cybercrime risks.

Public sector bans DeepSeek; DPP legislator: Taiwan, on the front line of China's threats, should have moved faster

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI program and discusses its use and associated cybersecurity risks. The government's ban is a response to potential threats posed by the AI system, including espionage and information security breaches, which could lead to harm to critical infrastructure or national security. Since no actual harm has been reported yet, but the risk is credible and the ban is a direct response to this risk, the event fits the definition of an AI Hazard. It is not an AI Incident because harm has not yet occurred, nor is it merely complementary information or unrelated news.

Taiwan's public sector bans DeepSeek; he says standing on the front line of threats calls for a prudent response

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek) whose deployment in public sector is banned due to credible cybersecurity risks that could lead to harm to critical infrastructure and information security. Although no specific harm is reported as having occurred, the ban is a response to plausible threats posed by the AI system's use, fitting the definition of an AI Hazard. The article does not describe an actual incident of harm but focuses on the potential risks and preventive measures taken by the government, thus it is best classified as an AI Hazard.

Ministry of Digital Affairs proposes DeepSeek ban, questioned over whether universities and research centers are also covered; Lee Yen-hsiu: the scope must be clarified

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and concerns about its use leading to potential national cybersecurity risks (information leakage, cross-border data transmission). The government has imposed a ban on its use in public institutions to prevent these risks. Since no actual harm has occurred yet but there is a credible risk of harm, this fits the definition of an AI Hazard. The legislator's call for clarification is a governance response but does not change the classification. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the ban is a direct response to a plausible AI-related risk.

DeepSeek sparks a sense of crisis! Palantir CEO: the US needs a "whole-of-nation effort" to develop AI

2025-01-31
NOWnews
Why's our monitor labelling this an incident or hazard?
The article focuses on geopolitical and competitive aspects of AI development between the US and China, with no mention of any AI system causing injury, rights violations, infrastructure disruption, or other harms. It does not describe any incident or hazard involving AI systems but rather provides commentary and strategic warnings, which fits the category of Complementary Information.

Ministry of Digital Affairs warns public sector off DeepSeek; KMT legislator: it has a duty to clarify the scope of the restrictions

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Deepseek) and concerns its use in public institutions due to cybersecurity risks. However, the article does not report any realized harm or incident caused by the AI system, nor does it describe a specific event where harm occurred. Instead, it discusses a governmental warning and policy restriction aimed at preventing potential risks. This constitutes a plausible risk scenario where the AI system's use could lead to harm, but no harm has yet materialized. Therefore, this is best classified as an AI Hazard, reflecting the potential for harm and the preventive measures being considered.

Ministry of Digital Affairs moves to ban DeepSeek; Lee Yen-hsiu urges clarifying the scope: don't ban for banning's sake

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental advisory restricting the use of an AI product due to cybersecurity concerns, which implies a plausible risk of harm but does not describe any actual harm or incident caused by the AI system. The legislator's call for clarification highlights governance and policy responses to potential AI risks. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm from the AI system's use and regulatory measures to mitigate that risk, rather than an AI Incident or Complementary Information.

Public sector bans DeepSeek; DPP legislator: information security must be taken seriously

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its cybersecurity risks, which could plausibly lead to harm if exploited (e.g., espionage, data breaches). The government's ban is a preventive action based on these plausible risks. Since no actual harm has occurred or been reported, and the article centers on the potential threat and policy response, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Ministry of Digital Affairs: government agencies may not use DeepSeek (photo)

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI) and concerns about its use leading to potential data leakage and national cybersecurity risks. However, the article does not report any actual harm or incident caused by the AI system but rather a warning and restriction to prevent such harm. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet occurred.

DPP legislator: Taiwan is on the front line of the threat and must respond cautiously to DeepSeek

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to potential cybersecurity and national security risks. However, no actual harm or incident has been reported; the article centers on preventive actions and warnings to avoid possible future harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has yet occurred.

Public sector bans DeepSeek; DPP legislator: Taiwan, on the front line of the threat, must respond cautiously

2025-02-01
NOWnews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use is prohibited by the government due to cybersecurity and national security concerns. Although no actual harm has been reported, the ban is based on the plausible risk that the AI system could lead to significant harm such as espionage or data breaches. Therefore, this event fits the definition of an AI Hazard, as it concerns a credible potential for harm from the AI system's use, but no incident has yet occurred.

Public sector bans DeepSeek; KMT legislator: the impact on academia risks throwing the baby out with the bathwater

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use the government has banned in the public sector due to cybersecurity and data privacy concerns, indicating a plausible risk of harm to national security and personal data. The article discusses the potential for harm rather than an actual incident or realized harm caused by the AI system. The political debate about the ban's impact on academia does not change the nature of the event as a precautionary measure against potential harm. Hence, this is an AI Hazard, not an AI Incident or Complementary Information.

Guarding against national cybersecurity risks, Ministry of Digital Affairs bans government agencies from using DeepSeek AI

2025-01-31
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek AI) and the government's prohibition on its use by public agencies due to concerns about national cybersecurity risks, including data leakage and cross-border information transmission. No actual harm or incident is reported; the directive is preventive to avoid potential harm. This fits the definition of an AI Hazard, where the use or development of an AI system could plausibly lead to harm (here, harm to national information security). It is not an AI Incident because no harm has occurred, nor is it complementary information since the focus is on the risk and prohibition rather than updates or responses to past events.

DeepSeek endangers cybersecurity; hundreds of companies and governments ban it

2025-01-31
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI model developed by DeepSeek and concerns about its potential to leak user data to the Chinese government, which could violate privacy rights and national security. Multiple governments and companies are restricting access to the AI system to prevent such harms. No actual harm is reported as having occurred yet, but the credible risk of harm is recognized and acted upon. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident involving violations of rights and harm to national security. The event is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated.

DeepSeek security concerns widen; hundreds of companies and government agencies worldwide restrict its use

2025-01-31
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI chatbot and tools) whose use is being restricted globally due to concerns about data privacy, potential data sharing with the Chinese government, and national security risks. These concerns relate to possible violations of privacy rights and breaches of legal obligations protecting fundamental rights, which are recognized harms under the AI Incident definition. However, the article does not report any realized harm or incident but rather the plausible risk and preventive actions taken by companies and governments. The warnings about potential misuse by hackers further support the credible risk of harm. Thus, the event fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident, but no direct harm has yet occurred.

Public sector bans DeepSeek; Ministry of Digital Affairs: it is a "product endangering national cybersecurity"

2025-01-31
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI large model) is explicitly mentioned, and its use is linked to potential harm to national information security through data leakage and cross-border transmission of sensitive information. The government has taken regulatory action to restrict its use in public agencies to prevent this harm. Although no specific incident of data breach is reported, the described risk and regulatory response indicate a credible and plausible threat of harm to critical infrastructure and national security. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to national cybersecurity.

To keep its AI from being copied, the US plans legislation to decouple from China

2025-02-01
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm caused by the AI system DeepSeek or related AI technologies. Instead, it discusses legislative proposals and government actions intended to prevent potential harms from AI technology transfer and use, reflecting concerns about plausible future risks to national security and data privacy. Therefore, the event fits the definition of an AI Hazard, as it involves credible potential for harm due to AI system development and use, but no direct or indirect harm has yet occurred. It is not Complementary Information because the main focus is on the potential risk and legislative response, not on updates to a past incident. It is not an AI Incident because no harm has materialized.

Fearing personal data will be "sent to China", Italy asks DeepSeek to clarify whether data is stored in China

2025-01-29
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's generative AI) and concerns about personal data privacy and security, which are potential risks. The Italian authority's request for information is a governance and regulatory response to these concerns. There is no indication that harm has occurred yet, only a plausible risk of harm related to data privacy and potential misuse or censorship. Therefore, this event fits the definition of Complementary Information as it provides context on societal and governance responses to AI-related privacy concerns, rather than reporting an AI Incident or AI Hazard.

Explicit ban: Ministry of Digital Affairs says the public sector may not use DeepSeek

2025-01-31
Yahoo Kimo Stock
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek AI) and concerns about its use in public sector organizations. The directive is issued to prevent potential cybersecurity incidents and data leaks that could harm national security, which fits the definition of an AI Hazard—an event where AI system use could plausibly lead to harm. Since no actual harm has occurred yet, and the focus is on preventing risk, this is classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk and prohibition related to the AI system's use, not on broader ecosystem updates or responses to past incidents.

To prevent leaks, government agencies restrict DeepSeek

2025-01-31
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI) whose use is restricted by government authorities due to concerns about potential harm to national cybersecurity through data leakage and cross-border transmission. Although no actual harm is reported as having occurred, the article clearly indicates a plausible risk of harm to national information security if the AI system is used. Therefore, this constitutes an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to critical infrastructure and national security.

Government agencies ban DeepSeek; critic calls the Ministry of Digital Affairs ridiculous: manufacturing pointless panic

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI product) and a government directive limiting its use due to cybersecurity concerns, which implies a potential risk of harm (information leakage, national security threats). However, the article does not describe any actual harm or incident caused by the AI system. The concerns are precautionary and speculative, and the article mainly reports on the advisory and public reactions. Therefore, this qualifies as an AI Hazard, as the use or development of the AI system could plausibly lead to harm, but no incident has occurred yet.

Ministry of Digital Affairs says government agencies may not use DeepSeek; KMT legislator: the scope of the restrictions needs clarifying

2025-02-01
NOWnews
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI model) and concerns about its cybersecurity risks, leading to government-imposed usage restrictions. However, no direct or indirect harm has occurred yet; the restrictions are preventive to avoid potential security breaches. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (cybersecurity incidents) if not controlled. It is not an AI Incident since no harm has materialized, nor is it merely complementary information as the main focus is on the potential risk and regulatory response rather than updates on past incidents.

Public sector restricts DeepSeek; DPP legislator: don't just ban it, also invest in developing AI

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI product, and the government's ban is motivated by cybersecurity and national security concerns, implying a credible risk of harm if the AI system is used. No actual harm is reported yet, only a preventive measure. The AI system is involved through its use and potential misuse. Hence, it fits the definition of an AI Hazard, as the event describes a circumstance where the AI system's use could plausibly lead to harm, but no incident has yet occurred.

Public sector restricts DeepSeek; Lee Yen-hsiu: including public universities could hurt academic work

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI product (DeepSeek) and government-imposed restrictions due to cybersecurity concerns, indicating potential risks associated with its use. However, there is no report of actual harm or incident caused by the AI system. The concerns raised are about possible future impacts on academic research and learning if restrictions are broadly applied. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., cybersecurity breaches) or significant disruption if not properly managed, but no incident has yet occurred.

Government agencies ban DeepSeek; KMT legislator: Ministry of Digital Affairs should clarify the scope of the restrictions

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns its use due to cybersecurity risks, which is a plausible risk of harm (an AI Hazard). There is no indication that any harm has already occurred, only that the government is restricting use to prevent potential harm. The legislator's call for clarification is a governance response. Therefore, this event is best classified as an AI Hazard, as it reflects a credible potential risk from the AI system's use but no realized harm yet.

Public sector bans DeepSeek; KMT legislator worries about throwing the baby out with the bathwater

2025-02-01
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek AI model) and concerns its use in the public sector and critical infrastructure. The restriction is based on cybersecurity risk, implying a plausible risk of harm (an AI Hazard) rather than a realized harm. There is no indication that harm has already occurred, only that the AI system's use could plausibly lead to cybersecurity incidents. The discussion focuses on governance and risk-management responses to this potential threat, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Preventing cybersecurity risks! Ministry of Digital Affairs: government agencies may not use DeepSeek

2025-01-31
UDN
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI product) and discusses its potential cybersecurity risks and national security implications. The government is issuing warnings and restrictions to prevent possible data leaks and harm to national information security. Since no actual harm has occurred yet but there is a credible risk of harm if the AI system is used, this qualifies as an AI Hazard. It is not an AI Incident because no realized harm is reported. It is not Complementary Information because the main focus is on the risk and policy action rather than updates or responses to a past incident. It is not Unrelated because the AI system and its risks are central to the event.

Ministry of Digital Affairs fires the first shot! Taiwan's government agencies may not use DeepSeek, to guard against cybersecurity risks

2025-01-31
NOWnews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted by government authorities due to concerns about cybersecurity risks and potential data leakage. Although no actual harm has been reported, the article clearly indicates that the use of this AI system could plausibly lead to harm to national information security, which falls under harm to critical infrastructure and possibly harm to communities. Therefore, this is an AI Hazard, as the event describes a credible risk of harm from the AI system's use, prompting preventive restrictions.

DeepSeek blocked from Apple and Google app stores in Italy

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI assistant app) whose use and data processing are under investigation by European regulators for compliance with data protection laws and potential risks like bias and election interference. The app has been blocked in Italy pending investigation. While there are concerns about possible harms (privacy violations, bias, election interference), the article does not report any actual harm occurring yet. Therefore, this situation represents a plausible risk of harm due to the AI system's use and data handling, qualifying it as an AI Hazard rather than an AI Incident. The regulatory actions and app blocking are responses to these potential risks, not evidence of realized harm.

China's DeepSeek raises doubts; major countries take countermeasures

2025-01-31
Newtalk
Why's our monitor labelling this an incident or hazard?
The article centers on the potential and ongoing risks posed by DeepSeek's AI models, including data privacy violations and national security concerns. While it does not report a specific realized harm event (such as a data breach or direct injury), the widespread governmental actions and warnings indicate a credible risk that the AI system could lead to harms such as violations of data protection laws and privacy rights. Therefore, the event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving violations of rights and data security breaches. The article mainly discusses the risk and regulatory responses rather than a concrete incident of harm, so it is not an AI Incident. It is more than just complementary information because the focus is on the credible risk and governmental measures taken to prevent harm.

Guarding against national cybersecurity risks, Ministry of Digital Affairs bans government agencies from using DeepSeek AI

2025-01-31
Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI) whose use is restricted by government authorities due to concerns about data security and potential information leakage that could harm national cybersecurity. Although no actual harm is reported, the article clearly states that the use of this AI system could plausibly lead to harm (information leakage, national security risks). Therefore, this constitutes an AI Hazard, as the development and use of this AI system could plausibly lead to an AI Incident involving harm to critical infrastructure and national security.

Taiwan bans government agencies from using DeepSeek

2025-01-31
Ming Pao Daily News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system developed in mainland China, and its use involves AI-generated processing of data. The warning is based on the plausible risk that using this AI system could lead to cybersecurity incidents involving data breaches or unauthorized data transmission, which could harm information security and potentially affect critical infrastructure. Although no actual harm is reported yet, the event highlights a credible risk of harm from the AI system's use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

DeepSeek raises security concerns; Ministry of Digital Affairs: government agencies may not use it

2025-01-31
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted due to plausible risks of data leakage and national security harm. The article does not describe any realized harm or incident but highlights a credible potential threat from the AI system's use in sensitive government contexts. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to critical infrastructure or national security. It is not Complementary Information because the main focus is on the risk warning, not on updates or responses to a past incident. It is not an AI Incident because no harm has occurred yet.

Government agencies ban DeepSeek; KMT legislator: Ministry of Digital Affairs should clarify the scope of the restrictions

2025-02-01
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model) and concerns about its use due to cybersecurity risks. However, the article does not describe any realized harm or incident caused by the AI system, only a precautionary ban to prevent potential security risks. This constitutes a plausible risk of harm (cybersecurity/data security issues) but no direct or indirect harm has yet occurred. Therefore, this is best classified as an AI Hazard, as the AI system's use could plausibly lead to harm if not controlled, but no incident has been reported.

Worried about personal data leaks, Italy asks DeepSeek to clarify whether data is stored in China

2025-01-29
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI platform) and concerns about personal data privacy and protection, which relates to potential violations of fundamental rights. However, the article describes a regulatory request for information and investigation rather than an actual realized harm or incident. There is no indication that harm has occurred yet, only a concern and a request for clarification. Therefore, this is best classified as Complementary Information, as it provides context and updates on governance and oversight related to AI data privacy risks, without reporting a concrete AI Incident or an imminent AI Hazard.

Citing national security, Taiwan bans government agencies from using DeepSeek (photo)

2025-02-01
Vision Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's AI model) and discusses its use and potential risks. Although no direct harm has yet occurred, the government's ban is motivated by credible concerns that the AI system could lead to information security breaches and harm national security. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to critical infrastructure or national security. The event is not an AI Incident because no realized harm is reported, nor is it merely complementary information or unrelated news.

DeepSeek unavailable for download from Apple and other app stores in Italy; no reason given yet

2025-01-29
on.cc
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI model application, so an AI system is involved. The event concerns regulatory actions regarding data privacy compliance, which is part of the AI system's development and use. No direct or indirect harm has been reported yet, but the regulatory inquiry suggests a plausible risk of legal or rights-related harm if issues are found. Therefore, this situation represents an AI Hazard, as the AI system's use could plausibly lead to violations of data protection laws or rights if unresolved.

South Korea asks DeepSeek to explain how it handles personal data; Taiwan bans government agency use

2025-02-01
on.cc
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI model) whose use involves processing user personal data. The regulatory scrutiny and prohibitions stem from concerns about potential mishandling of personal data and cross-border data transmission risks, which could plausibly lead to violations of privacy rights and data breaches. Since no actual harm is reported but credible concerns and preventive actions are described, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek | Western media ask: why does Chinese technology keep catching the West off guard?

2025-02-01
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article centers on the competitive landscape and geopolitical implications of Chinese AI advancements, particularly DeepSeek, and Western reactions. It does not report any concrete harm or incident caused by AI systems, nor does it identify a credible risk of harm from AI use or malfunction. The focus is on analysis, commentary, and reporting on responses such as bans and investigations, which are governance and societal responses. Therefore, the event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

DeepSeek | Unavailable for download from Google and Apple stores in Italy

2025-01-30
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI platform) whose use involves processing personal data. The regulatory authority's investigation and the app's removal from stores are responses to potential violations of data privacy laws and concerns about harms such as bias and election interference, which relate to violations of rights and harm to communities. However, the article does not report any realized harm yet; it focuses on the regulatory action and investigation. Therefore, this event represents a plausible risk of harm due to the AI system's use and is best classified as Complementary Information, as it provides context on governance and regulatory responses rather than reporting a direct or indirect AI Incident or an AI Hazard.

DeepSeek raises security concerns; expert: this is precisely where the world distrusts China most

2025-02-01
Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
The article centers on potential security and privacy risks associated with the use of an AI system (DeepSeek) and governmental bans to mitigate these risks. There is no direct or indirect evidence of actual harm occurring due to DeepSeek's use, only plausible future risks related to data privacy and surveillance. Therefore, this qualifies as an AI Hazard, as the development and use of DeepSeek could plausibly lead to incidents involving data breaches or violations of privacy rights, but no incident has been reported yet.

Government agencies ban DeepSeek; KMT legislator: Ministry of Digital Affairs should clarify the scope of the restrictions

2025-02-01
Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI model) and its use by public institutions. However, the event centers on a government-imposed restriction due to potential cybersecurity risks, without any realized harm or incident reported. The concerns are about possible future risks and the scope of regulation, which fits the definition of an AI Hazard. Since the article mainly discusses the regulatory response and the need for clarification rather than an actual incident or harm, it is best classified as an AI Hazard.

DeepSeek raises security concerns; Ministry of Digital Affairs reminds government agencies not to use it

2025-01-31
Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek AI) and highlights concerns about its use leading to potential cybersecurity risks, including data leakage and cross-border transmission of sensitive information. Although no actual harm has been reported yet, the government's warning and restrictions indicate a credible risk that the AI system's use could plausibly lead to harm to national information security, which falls under harm to critical infrastructure management and operation. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is potential and preventive measures are being implemented.

Whether or not DeepSeek is banned, US State Department: use of risky tools will be restricted

2025-02-01
Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
DeepSeek's AI models are explicitly mentioned as the AI systems involved. The event centers on the use of these AI systems and the associated risks to data security and privacy, which could lead to violations of rights and harm to communities. Since no actual harm has been reported yet, but credible concerns and restrictions are in place to prevent potential harm, this qualifies as an AI Hazard. The article primarily discusses the plausible future harm and the governance responses to mitigate these risks, rather than reporting an incident where harm has already occurred.

Worried about security risks, South Korea will ask DeepSeek to explain its handling of user data

2025-01-31
Radio Taiwan International
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI model) and concerns about its use of personal data, which relates to privacy and data protection. However, there is no indication that any actual harm (such as data breaches, misuse, or violations) has occurred yet. The authorities are seeking information and considering preventive measures, which points to a potential risk rather than a realized incident. Therefore, this event is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related privacy concerns without reporting a specific AI Incident or Hazard.

DeepSeek AI security concerns: Ministry of Digital Affairs warns public sector to restrict use

2025-02-01
Public Television Service (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) and discusses its use in public sector contexts. The warnings and restrictions stem from concerns about cybersecurity risks and potential leakage of sensitive data, which would constitute harm to national security and potentially to communities or property through information security breaches. Although no specific leak is reported, the warnings and restrictions indicate a credible risk of harm from the AI system's use. Because the article focuses on potential risks and preventive measures rather than a realized harm, the event is best classified as an AI Hazard.

DeepSeek raises security concerns; Ministry of Digital Affairs: government agencies may not use it

2025-01-31
SETN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model) whose use is restricted by government authorities due to credible concerns about information security risks that could lead to harm to national critical infrastructure and data confidentiality. Although no direct harm is reported as having occurred, the warning and restriction indicate a plausible risk of harm if the AI system were used in sensitive government contexts. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving disruption or breach of critical infrastructure security and data confidentiality.

Behind the news: Why does DeepSeek frighten the world? Privacy risks from this CCP move have become a global nightmare

2025-02-01
三立新聞
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (DeepSeek) that processes user data and operates under a legal framework (China's Data Security Law) that could lead to government access to private data without safeguards. This creates a credible risk of harm to privacy and data security, which are violations of rights under the OECD framework. Since no actual harm or incident is reported, but the potential for harm is emphasized, this qualifies as an AI Hazard. The article also discusses regulatory responses and global concerns, but the main focus is on the plausible future harm from DeepSeek's operation under Chinese law, not on a completed incident or complementary information about responses to a past incident.

After multiple countries impose restrictions, Japan states its position on DeepSeek

2025-01-31
香港文匯網
Why's our monitor labelling this an incident or hazard?
The article does not describe any direct or indirect harm caused by the AI system, nor does it indicate any incident or malfunction. It mainly covers governmental monitoring and intention to respond appropriately to AI risks, which aligns with providing contextual or governance-related information. Therefore, it fits the category of Complementary Information rather than an AI Incident or AI Hazard.

Taiwan bans government agencies from using DeepSeek over security concerns

2025-01-31
美國之音
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek's large language model) is explicitly involved. The event concerns the use of this AI system by government agencies and the associated cybersecurity risks. The Taiwanese government has prohibited its use to prevent potential data breaches and national security threats, indicating a plausible risk of harm. Since no actual harm has occurred yet, but the risk is credible and the government is acting to prevent it, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential threat and preventive policy rather than reporting a realized harm or incident.

Taiwan authorities ban government agencies from using DeepSeek

2025-02-01
hkcd.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI model) and a governmental response restricting its use due to security concerns. However, there is no indication that any actual harm has occurred yet. The focus is on preventing possible future risks related to data security. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has been reported.

Taiwan authorities' demand that government agencies stop using DeepSeek sparks controversy

2025-02-02
hkcna.hk
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system explicitly mentioned. The ban is due to concerns about data leakage and cross-border data transmission, which are plausible risks that could lead to harm such as violations of data privacy or information security incidents. However, the article does not report any actual harm or incident caused by DeepSeek's use. The event is about a preventive measure and the surrounding political controversy, not a realized AI Incident. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has yet occurred.

Ministry of Digital Affairs: government agencies may not use DeepSeek AI services

2025-01-31
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI) and addresses its use in public agencies. The advisory is based on concerns about potential cybersecurity risks and data leakage, which could plausibly lead to harm to national information security. However, there is no indication that any harm has yet occurred. Therefore, this event constitutes an AI Hazard, as it concerns plausible future harm from the use of the AI system rather than an actual incident. It is not Complementary Information because the main focus is on the risk and restriction itself, not on updates or responses to a past incident.

Ministry of Digital Affairs says government agencies are barred from DeepSeek; KMT legislators respond

2025-02-01
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its cybersecurity risks, leading to a government-imposed restriction to prevent potential harm. Since no realized harm or incident is described, but there is a credible risk that use of the AI system could lead to security breaches, this qualifies as an AI Hazard. The event is neither an AI Incident nor Complementary Information, as it does not describe realized harm, a response to a past incident, or a broader governance development beyond the restriction itself. Therefore, the classification is AI Hazard.

Italian Regulator Blocks DeepSeek Over Personal Data Concerns

2025-02-01
NOQ Report - News, Opinions, and Questions for Conservatives and Christians
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use raises concerns about violations of applicable data protection laws and fundamental rights related to personal data privacy. Although no direct harm is reported as having occurred, the regulator's blocking of access indicates a response to potential or ongoing violations. Since the article focuses on regulatory action and concerns about legal compliance rather than a realized harm incident, this qualifies as Complementary Information providing context on governance and societal responses to AI-related privacy issues.

Italy Blocks DeepSeek Over Data Concerns | Silicon UK Tech News

2025-02-03
Silicon UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI chatbot) that processes personal data and uses it for training AI models. The Italian regulator's order to stop data processing and the opening of a probe indicate concerns about potential violations of data protection laws, which are legal frameworks protecting fundamental rights. No actual harm or violation is confirmed yet, but the credible risk of such harm justifies classification as an AI Hazard. The event does not describe realized harm or incident but highlights a plausible future risk due to insufficient compliance with data protection regulations.

Italy moves to ban DeepSeek amid data privacy concerns

2025-02-03
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has been prohibited by a regulatory authority due to concerns about unlawful data collection and insufficient transparency. This constitutes a violation of data protection and privacy rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the harm is realized (the ban is a response to unlawful practices), this qualifies as an AI Incident.

Italy Halts DeepSeek Chatbot Amid Data Privacy Concerns | Technology

2025-02-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (DeepSeek's chatbot) and regulatory intervention due to privacy concerns. The suspension is a response to the company's failure to clarify data collection and storage practices, which could lead to violations of privacy rights if unaddressed. Since no direct harm or incident has been reported, but there is a credible risk of harm due to non-compliance, this qualifies as an AI Hazard. The event does not describe an actual incident of harm but a plausible risk that the AI system's use could lead to violations of rights if continued without compliance.

Italy blocks Chinese AI app DeepSeek

2025-02-04
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The DeepSeek AI chatbot is an AI system whose use is being restricted due to concerns about privacy policy compliance and data protection, which are legal rights violations if realized. The Italian regulator's blocking order and investigation indicate that the AI system's use could plausibly lead to violations of fundamental rights (privacy) if allowed to operate without adequate safeguards. Since no actual harm or breach has been reported yet, and the action is preventive, this fits the definition of an AI Hazard. The event is not merely complementary information because the blocking and investigation are direct responses to the AI system's use and potential risks. It is not unrelated because the AI system is central to the event and the regulatory action.

UPDATE 3-Italy's regulator blocks Chinese AI app DeepSeek on data...

2025-02-04
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to regulatory intervention due to concerns over privacy violations and insufficient compliance with data protection laws. The regulator's blocking order and investigation indicate that the AI system's use has directly or indirectly led to a breach of obligations under applicable law protecting fundamental rights (privacy). Therefore, this is an AI Incident as per the definitions, since the AI system's use has caused a violation of rights and legal obligations.

Italy's regulator blocks Chinese AI app DeepSeek on data protection By Reuters

2025-02-04
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use raised concerns about data protection and privacy compliance. The regulator's blocking order and investigation aim to prevent violations of users' data privacy rights, which are fundamental rights under applicable law. Since no actual harm or rights violation has been reported as having occurred yet, but there is a credible risk of such harm if the AI system continues operating without adequate safeguards, this constitutes an AI Hazard. The event is not merely general AI news or a product launch, but a regulatory action addressing plausible future harm from the AI system's use.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-02-04
ThePrint
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory intervention against an AI system (DeepSeek's chatbot) due to concerns about privacy policy and data protection compliance. While the AI system is involved and there is a legal framework protecting fundamental rights (privacy), the article does not indicate that any actual harm or violation has occurred yet. The blocking is a precautionary measure and investigation is ongoing. Therefore, this event is best classified as Complementary Information, as it provides important context on governance and regulatory responses to AI systems but does not describe a specific AI Incident or AI Hazard with realized or plausible harm at this stage.

Italy's regulator blocks Chinese AI app DeepSeek on data protection

2025-02-04
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and concerns about its use of personal data, which relates to human rights and legal obligations under data protection laws. However, no actual harm or violation has been reported as having occurred; instead, the regulator has taken preventive action by blocking the app and opening an investigation. This fits the definition of Complementary Information, as it details a governance response to AI-related privacy concerns without describing a realized AI Incident or a plausible future harm (AI Hazard).

DeepSeek failed 100 percent — what is wrong with the AI

2025-02-03
ФОКУС
Why's our monitor labelling this an incident or hazard?
DeepSeek R1 is an AI system (a chatbot based on a large language model). The article reports that it failed all safety tests designed to prevent harmful outputs, meaning it directly enabled generation of harmful content such as cybercrime and disinformation, which constitute harm to communities and potential legal violations. The failure of internal safety mechanisms is a malfunction of the AI system leading to harm. The accusations of data theft and cyberattacks further indicate misuse and legal breaches related to AI development and deployment. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction and use have directly led to harm.

Chinese AI app DeepSeek blocked in Italy: the reason

2025-02-04
Економічна правда
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's chatbot) and discusses regulatory scrutiny and blocking due to privacy concerns, which relate to legal obligations protecting rights. However, no actual harm or incident (such as data misuse or breach) is reported, only regulatory action and concerns. This fits the definition of Complementary Information, as it details governance responses and regulatory measures addressing AI system risks, rather than describing a realized AI Incident or a plausible AI Hazard.

Chinese AI DeepSeek passed app-store checks, but what is "under the hood" remains unknown, says Ministry of Digital Transformation

2025-02-04
unn.ua
Why's our monitor labelling this an incident or hazard?
The article does not describe any direct or indirect harm caused by the AI system DeepSeek, nor does it report any incident or malfunction. It highlights potential security concerns and the need for further technical analysis, which implies a plausible risk but no confirmed harm yet. Therefore, this qualifies as Complementary Information, as it provides context and updates about the AI system's status and security considerations without reporting an AI Incident or AI Hazard.

AI app DeepSeek blocked in Italy

2025-02-04
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, thus an AI system is involved. The regulatory action stems from concerns about the use of personal data, which relates to violations of data protection laws and potentially fundamental rights to privacy. However, the article does not report any realized harm such as data breaches or privacy violations that have already occurred, only regulatory concerns and preventive blocking. Therefore, this event represents a plausible risk of harm related to AI use, qualifying it as an AI Hazard rather than an AI Incident. It is not merely complementary information because the blocking order is a direct regulatory response to potential harm, not just an update or context.

Downloading DeepSeek could carry 20 years in prison — the reasons

2025-02-04
ФОКУС
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and discusses legislative measures targeting its use and import due to concerns about security and economic impact. However, no direct or indirect harm caused by the AI system is reported as having occurred. The focus is on potential risks and regulatory responses, which fits the definition of Complementary Information as it provides context and governance responses to AI developments without describing a specific AI Incident or Hazard.

DeepSeek failed 100%: Chinese AI did not pass a single safety test

2025-02-03
InternetUA
Why's our monitor labelling this an incident or hazard?
DeepSeek R1 is an AI system (a large language model chatbot) that was tested for safety against harmful prompts. The failure to block any harmful requests means the AI system's use has directly led to a significant risk of harm, including cybercrime and misinformation. The article describes realized vulnerabilities and the AI's inability to prevent harm, which fits the definition of an AI Incident. The harm categories mentioned (cybercrime, misinformation, illegal activities) align with harms to communities and violations of law. The failure is not hypothetical but demonstrated through testing, indicating direct or indirect harm potential. Hence, this is an AI Incident rather than a hazard or complementary information.

The reason is data protection: Italy blocks access to a well-known app

2025-01-31
radiosarajevo.ba
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use of personal data is under scrutiny by data protection authorities. The blocking of the app is a preventive measure due to insufficient information about data processing, indicating a potential risk of violation of data protection rights. However, there is no indication that harm has already occurred, only that it could plausibly occur if the issues are not resolved. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Italy pulls DeepSeek: Chinese AI app under investigation

2025-01-31
Bljesak.info
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI application, and the regulatory investigation is due to its potential non-compliance with GDPR, which protects fundamental rights related to data privacy. The event describes an ongoing investigation and regulatory measures due to possible legal violations caused by the AI system's use of personal data. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to a breach of obligations under applicable law intended to protect fundamental rights. The harm is legal and rights-based, and the investigation and app removal indicate materialized concerns rather than mere potential risk.

Several European regulators investigate Chinese AI app DeepSeek over data protection

2025-01-31
oslobodjenje.ba
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) whose use and data processing are under investigation by European regulators for compliance with data protection laws and potential risks such as bias and election interference. While these concerns indicate plausible risks of harm (privacy violations, rights breaches, manipulation), no actual harm or incident has been confirmed or reported. Therefore, this situation fits the definition of an AI Hazard, as the AI system's development or use could plausibly lead to an AI Incident, but no direct or indirect harm has yet occurred.

Several European regulators investigate Chinese AI app DeepSeek over data protection

2025-02-01
Haber.ba
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI application) and regulatory scrutiny regarding its data processing practices and potential risks such as bias and election interference. However, there is no indication that any harm has already occurred. The event centers on the plausible risk of harm due to the AI system's use and compliance with data protection laws, making it an AI Hazard. It is not Complementary Information because the focus is on the investigation and potential risks, not on responses to a past incident. It is not an AI Incident because no realized harm or violation has been confirmed or reported yet.

Several European regulators investigate Chinese AI app DeepSeek over data protection

2025-01-31
Avaz.ba
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek AI application) whose use is under regulatory scrutiny for potential violations of data protection and privacy laws, as well as risks of bias and election interference. Although no actual harm has been confirmed or reported, the regulators' investigations and app removal indicate a credible risk that the AI system's use could lead to harms such as violations of fundamental rights (privacy), harm to communities (election interference), or bias-related harms. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident but no realized harm is yet documented.