DeepSeek AI Data Breach Exposes Sensitive Information

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Wiz, a cybersecurity firm, discovered a security vulnerability at DeepSeek, a Chinese AI startup, that exposed sensitive data online. The exposed database included chat records and API secrets, allowing unauthorized access. DeepSeek quickly secured the database, but concerns about data privacy remain, prompting inquiries from Italian and Australian regulators.[AI generated]
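The root cause described here, a database reachable from the internet without authentication, is a common infrastructure misconfiguration rather than a flaw in the AI model itself. The sketch below is purely illustrative (the configuration keys are hypothetical placeholders, not drawn from DeepSeek's actual setup) and shows the kind of checks a basic exposure audit might perform:

```python
def audit_db_config(config: dict) -> list[str]:
    """Flag risky settings in a (hypothetical) database server config.

    The keys used here are illustrative placeholders, not the options
    of any specific database product.
    """
    findings = []
    if config.get("bind_address") in ("0.0.0.0", "::"):
        findings.append("listens on all interfaces: reachable from the internet")
    if not config.get("require_auth", False):
        findings.append("no authentication: anyone who connects can run queries")
    if not config.get("tls_enabled", False):
        findings.append("no TLS: traffic readable in transit")
    return findings

# A configuration resembling the exposure reported here: publicly
# reachable, unauthenticated, unencrypted.
exposed = {"bind_address": "0.0.0.0", "require_auth": False, "tls_enabled": False}
for finding in audit_db_config(exposed):
    print("-", finding)
```

Hardening amounts to inverting each finding: bind to an internal interface, require credentials, and enable TLS. The reporting summarized above indicates at least the first two controls were absent.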

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (DeepSeek) whose data was exposed due to an unprotected database. The exposure of sensitive user inputs is a direct harm to users' privacy and potentially a violation of data protection laws, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this qualifies as an AI Incident because the AI system's use and data management directly led to harm through data exposure.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Accountability; Transparency & explainability; Respect of human rights

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Database with sensitive DeepSeek data was openly accessible online - WELT

2025-01-30
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose data was exposed due to an unprotected database. The exposure of sensitive user inputs is a direct harm to users' privacy and potentially a violation of data protection laws, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this qualifies as an AI Incident because the AI system's use and data management directly led to harm through data exposure.

Artificial intelligence: Database with sensitive DeepSeek data was openly accessible online

2025-01-30
General-Anzeiger Bonn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose sensitive user data was exposed due to an unprotected database. The exposure of sensitive personal data can be considered a violation of rights under applicable data protection laws, which fits the definition of an AI Incident. Although no direct physical harm is reported, the breach of privacy and potential misuse of exposed data represent significant harm linked to the AI system's development or use.

Database with sensitive DeepSeek data was openly accessible online

2025-01-30
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose sensitive data was exposed due to an unprotected database. The exposure of sensitive user data constitutes a violation of privacy and potentially human rights, which is a harm under the AI Incident definition. The AI system's development and use led to this harm indirectly through poor data security practices. Therefore, this qualifies as an AI Incident.

Artificial intelligence: Database with sensitive DeepSeek data was openly accessible online

2025-01-30
stuttgarter-zeitung.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a database related to DeepSeek, an AI company, was left open and accessible on the internet, which is a direct consequence of the AI system's development and operation environment. The exposure of sensitive data linked to the AI system can lead to violations of privacy and potentially other harms. The database was accessible for some time before being secured, and while it is unclear if data was accessed or stolen, the incident itself is a realized security breach involving an AI system. This fits the definition of an AI Incident as it involves harm related to the AI system's use and operation, specifically a breach of obligations under applicable law protecting data privacy and security.

Database with sensitive DeepSeek data was openly accessible online

2025-01-30
mannheimer-morgen.de
Why's our monitor labelling this an incident or hazard?
The event describes a security incident involving an AI system's data being exposed publicly, which is a direct harm to user privacy and a violation of rights. The AI system DeepSeek's database was left unprotected, leading to sensitive user data being accessible to anyone. This fits the definition of an AI Incident because the AI system's development or use led directly to a harm (violation of data privacy and user rights). The subsequent securing of the database does not negate the fact that harm occurred or could have occurred. Therefore, this event is classified as an AI Incident.

Security vulnerability uncovered in Chinese AI app DeepSeek

2025-01-30
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI app (DeepSeek) with an AI model involved. The security breach exposed sensitive user data, which is a direct harm related to the AI system's use and deployment. The breach constitutes a violation of user privacy and possibly legal rights, fitting the definition of harm to persons or groups. The AI system's involvement is clear, and the harm has occurred (data exposure). Hence, this is an AI Incident rather than a hazard or complementary information.

Cybersecurity firm: DeepSeek's sensitive data exposed online | deepseek | Wiz | security vulnerability | The Epoch Times

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose backend data, including user chat logs and sensitive operational details, were exposed due to a security vulnerability. This exposure constitutes a breach of data privacy and protection laws, thus a violation of human rights and legal obligations. The harm is realized as sensitive user data was accessible publicly, posing risks to individuals' privacy and security. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use and its associated infrastructure directly led to a significant harm (privacy violation).

Taiwan bans government agencies from using DeepSeek

2025-02-01
Zaobao
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI product) and concerns about its use leading to potential harm to national cybersecurity through data leakage and cross-border transmission. However, the article does not report any actual harm or incident caused by the AI system, only the plausible risk of harm. Therefore, this is an AI Hazard, as the use of the AI system could plausibly lead to harm but no harm has yet occurred.

U.S. State Department to restrict use of DeepSeek to prevent data risks

2025-02-01
internet.cnmo.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data collection and processing. The article details regulatory investigations and government warnings about data security risks, which could plausibly lead to harms such as violations of privacy rights or risks to critical infrastructure data. However, no actual harm or incident has been reported yet, only potential risks and preventive measures. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving data risk or security breaches.

Pentagon blocks DeepSeek on parts of its networks after finding employees connecting to Chinese servers - cnBeta.COM mobile edition

2025-01-31
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI chatbot, thus an AI system is involved. The Pentagon's blocking of access is a use-related intervention to prevent potential security and ethical harms. Since no actual harm has been reported, but the action is taken to prevent plausible future harm, this qualifies as an AI Hazard rather than an Incident. The event focuses on the potential risks and preventive measures rather than realized harm.

Taiwan's Ministry of Digital Affairs bans government agencies from using DeepSeek

2025-01-31
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and concerns about its use leading to potential harm to national information security, which is a form of harm to critical infrastructure and possibly communities. Since no actual harm has been reported but there is a clear warning about plausible risks, this fits the definition of an AI Hazard. The event is not a realized incident but a credible potential threat due to the AI system's development and use.

Japanese media: Japanese government issues statement on DeepSeek

2025-01-31
Eastmoney
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by DeepSeek's AI system, nor does it indicate a plausible imminent harm. It mainly provides information about governmental statements and regulatory attention, which fits the definition of Complementary Information as it supports understanding of AI governance and responses without reporting a new incident or hazard.

Japan, France, Italy and others weigh in on DeepSeek, and so do Cook and Zuckerberg...

2025-01-31
Eastmoney
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek's generative AI model) and their use. However, it does not report any direct or indirect harm caused by DeepSeek's development or use. Instead, it details governmental and corporate responses, investigations, and cautious attitudes toward the AI system, reflecting concerns about privacy risks and market competition. Since no harm has materialized but plausible risks are recognized and regulatory actions are underway, this fits the definition of Complementary Information, as it provides updates and context on AI ecosystem responses rather than reporting an AI Incident or AI Hazard.

Trending: DeepSeek's U.S. trademark snapped up by an alumnus of Liang Wenfeng's university!

2025-02-01
Eastmoney
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI technology) and discusses its development, use, and associated controversies. However, it does not document any direct or indirect harm caused by the AI system, such as injury, rights violations, or operational disruptions. The trademark dispute, privacy investigations, and cyberattacks are mentioned but without clear linkage to realized harm caused by the AI system itself. The governmental warnings and international reactions are responses to potential risks but do not describe a specific AI Hazard event with plausible future harm detailed. Hence, the article mainly provides complementary information about ongoing AI ecosystem developments, legal disputes, and governance responses related to DeepSeek.

Taiwan orders government agencies to stop using DeepSeek

2025-02-01
Eastmoney
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI products) and concerns about cybersecurity risks related to its use. However, no actual harm or incident has been reported; the directive is a preventive action based on plausible risks. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (data or information security breaches) if not restricted, but no harm has yet occurred.

Negative news piles up just as DeepSeek goes viral; did Li Qiang misjudge? | deepseek | 深度求索 | nationalism | The Epoch Times

2025-01-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model and AI assistant) whose development and use have led to direct harms: exposure of over one million unprotected sensitive data entries including user messages, potential unauthorized use of OpenAI's technology, and suspected financial crimes involving market manipulation. These constitute violations of data privacy rights and financial regulations, which fall under harms to human rights and legal obligations. The article also mentions ongoing investigations and sanctions, confirming the seriousness of the incident. The AI system's role is pivotal in these harms, as the data leaks and misuse stem from its operation and associated infrastructure. Hence, this is an AI Incident rather than a hazard or complementary information.

DeepSeek blocked from Apple and Google app stores in Italy | deepseek | AI | artificial intelligence | The Epoch Times

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI assistant app). The event involves its use and regulatory scrutiny over its data handling and potential societal impacts such as bias and election interference. Although no direct harm is reported yet, the blocking of the app and investigations reflect credible concerns that the AI system could plausibly lead to harms such as violations of data protection rights, bias, and election interference. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harms stemming from the AI system's use and compliance issues. It is not an AI Incident because no realized harm is described, nor is it merely complementary information or unrelated news.

What exactly is DeepSeek, the newcomer that sent the entire Nasdaq into a panic? - TMTPost

2025-02-01
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (DeepSeek's AI models) used in quantitative investment that has directly caused a major financial market disruption, leading to substantial economic losses for individuals and companies. This constitutes harm to property and economic interests, fitting the definition of an AI Incident. The AI system's development and use in trading strategies is central to the event, and the harm is realized and significant. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

The U.S. moves against DeepSeek, banning any download, installation, or use; multiple countries follow with restrictions

2025-02-01
MyDrivers
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose use is being restricted and investigated due to concerns about intellectual property theft, national security, and data privacy. The US and other countries are taking preventive measures and conducting inquiries, which indicates plausible future harm or risks associated with the AI system. Since no actual harm or incident has been reported yet, but credible concerns and regulatory actions are underway, this event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the potential risks and regulatory responses rather than updates on past incidents. It is not an AI Incident because no direct or indirect harm has been reported as having occurred.

On the White House's investigation list! U.S. congressional offices told to ban DeepSeek, for bluntly stated reasons

2025-01-31
MyDrivers
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and governmental responses to its use, focusing on investigation and warnings about potential risks. There is no indication that DeepSeek has caused any direct or indirect harm yet, only that it is under scrutiny and use is being restricted as a precaution. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm or risks that are being investigated, but no incident has occurred.

Pentagon bans DeepSeek: "some employees had connected to Chinese servers to use it"

2025-01-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's generative AI chatbot) used by U.S. government employees. The use of this AI system connected to Chinese servers in sensitive government environments raises security concerns. Although no direct harm (such as data leaks or espionage) is reported, the potential for such harm is credible and significant, given the context of national security and sensitive information. The DoD's blocking of the website and Congress's restrictions reflect recognition of this plausible risk. Since the article does not report an actual realized harm but focuses on the potential security risks and policy responses, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

The DPP authorities also set their sights on DeepSeek

2025-02-01
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and concerns about its use leading to information security risks, which could disrupt critical infrastructure. The event stems from the use of the AI system and the potential for harm (data leaks, security breaches). Since no actual harm or incident has occurred yet, but the risk is credible and has led to official restrictions, this fits the definition of an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.

Japan weighs in as well

2025-01-31
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI system, nor does it indicate any direct or indirect harm resulting from its use or malfunction. Instead, it focuses on governmental monitoring, policy stance, and international regulatory inquiries, which are responses and contextual information about AI governance. Therefore, this is Complementary Information as it provides updates on societal and governance responses to AI developments without reporting a new incident or hazard.

UK officials are studying DeepSeek's impact on national security

2025-01-30
RFI
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a large language model chatbot) whose development and use have raised concerns about data privacy and misinformation, which are linked to potential violations of rights and harm to communities. The article reports ongoing investigations and regulatory inquiries but does not describe any realized harm or incident. Therefore, this situation represents a plausible future risk of harm stemming from the AI system's use and impact, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Artificial intelligence: South Korean regulator asks DeepSeek to explain its handling of personal data

2025-01-31
RFI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) whose use and data processing practices have led to regulatory investigations and enforcement actions due to concerns about personal data privacy violations. The involvement of the AI system in processing personal data without adequate compliance with data protection laws directly relates to breaches of fundamental rights protected by law. The regulatory bans and investigations indicate that harm or legal violations have occurred or are ongoing, not just potential risks. Therefore, this is an AI Incident as per the definitions, specifically a violation of rights under applicable law.

2025-01-30
guancha.cn
Why's our monitor labelling this an incident or hazard?
The DeepSeek app is an AI system that processes personal data, and its removal from Italian app stores follows regulatory scrutiny over potential violations of data privacy laws and concerns about bias and election interference. While the app is currently unavailable in Italy, no direct harm such as injury, rights violations, or community harm has been reported. The regulatory action and investigation indicate a credible risk that the AI system could lead to harm if used without compliance. Hence, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if issues are not resolved.

2025-01-31
guancha.cn
Why's our monitor labelling this an incident or hazard?
The article details concerns and preventive measures taken by various governments regarding the use of DeepSeek's AI services due to potential information security and data privacy risks. These concerns are about plausible future risks rather than documented harms. There is no indication that the AI system has directly or indirectly caused injury, rights violations, infrastructure disruption, or other harms. Therefore, this event fits the definition of Complementary Information as it provides context on governance responses and risk monitoring without describing a specific AI Incident or AI Hazard.

2025-01-31
guancha.cn
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory scrutiny and responses to a generative AI service, including requests for information and app removals, but does not report any actual harm or incident caused by the AI system. The statements emphasize monitoring and risk management rather than describing an AI incident or hazard. Therefore, this is best classified as Complementary Information, providing context on governance and societal responses to AI developments.

Italy's Garante requests information from DeepSeek; data at risk - China - Ansa.it

2025-01-28
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek chatbots) and concerns about data privacy and legal compliance, which could plausibly lead to harm if personal data is mishandled or improperly processed. However, no actual harm or violation has been reported so far; the data protection authority is seeking information to assess risks. Therefore, this is best classified as an AI Hazard, reflecting a credible potential risk to data privacy and legal rights related to AI system use.

Why DeepSeek draws U.S. "envy and resentment": great value for money sends shockwaves

2025-01-31
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article centers on a dispute over intellectual property and competitive dynamics between AI developers but does not describe any actual harm or plausible future harm caused by DeepSeek's AI system. The use of open-source code and public resources is acknowledged, but no violation or harm is confirmed. The narrative is about market impact and geopolitical reactions rather than an AI Incident or Hazard. Therefore, this is best classified as Complementary Information providing context on AI ecosystem developments and responses.

NASA bans DeepSeek over security and privacy concerns

2025-02-01
China.com Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use due to security and privacy risks. However, there is no indication that any harm has occurred or that an incident has taken place. The action is a preventive measure to avoid potential risks, which aligns with an AI Hazard classification. The event describes plausible future harm from the use of this AI system, but no realized harm is reported.

Taiwan's ban on government agencies using DeepSeek mocked as more narrative-driven "cognitive warfare"

2025-02-01
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI model) and discusses its use restrictions due to cybersecurity concerns, which could plausibly lead to harm such as data breaches or information leakage affecting critical infrastructure or public agencies. No direct harm or incident has occurred yet, only warnings and preventive measures. Therefore, this qualifies as an AI Hazard, as the event concerns plausible future harm from the AI system's use, not an actual realized incident. The public debate and criticism do not change the classification, as they relate to the perception of the hazard rather than a new incident or complementary information.

Japanese media: Japanese government issues statement on DeepSeek, monitoring international AI developments

2025-01-31
China.com Tech
Why's our monitor labelling this an incident or hazard?
The article discusses the development and deployment of an AI system (DeepSeek-R1) and international responses, including data protection inquiries and geopolitical concerns. While it mentions potential risks and strategic impacts, there is no indication of actual harm or incidents caused by the AI system. The Japanese government's statement is about monitoring and responding appropriately, which aligns with a governance or contextual update rather than an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI developments and international responses without reporting a specific AI Incident or AI Hazard.

Taiwan bans government agencies from using DeepSeek on national security grounds (photo) - Politics in Focus

2025-02-01
Vision Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its use in government and critical infrastructure. The ban is motivated by concerns over cybersecurity risks and potential information leakage, which could lead to harm to national security (a form of harm to critical infrastructure and information security). Since no actual harm has occurred but the risk is credible and the government is acting to prevent it, this qualifies as an AI Hazard. The event is not a realized incident but a preventive measure against plausible future harm from the AI system's use.

Citing security concerns, Taiwan bans government agencies from using DeepSeek

2025-01-31
Voice of America
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek, an AI large language model) is explicitly involved. The event concerns the use of this AI system and the government's decision to prohibit its use in sensitive public sectors due to security concerns. No actual harm has been reported yet, but the potential for harm (data leakage, national security risks) is credible and significant. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, prompting preventive action by authorities.

Taiwan's government agencies barred from DeepSeek; DPP legislator: don't just ban, develop AI too

2025-02-01
Zaobao
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI products from DeepSeek, a Chinese AI company, being banned due to cybersecurity and national security concerns, indicating the involvement of an AI system. The ban is a preventive measure to avoid potential data leakage or espionage, which would be violations of rights and harm to national security. Since no actual harm has been reported but the risk is credible and recognized by authorities, this fits the definition of an AI Hazard. The article also includes calls for developing domestic AI capabilities, but this is complementary context rather than the main event. Therefore, the event is best classified as an AI Hazard.

Taiwan bans government agencies from using DeepSeek, citing information and communication security

2025-02-01
Zaobao
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to potential harm to national information security, which falls under harm to critical infrastructure management and operation. However, no actual harm or incident has been reported; the ban is a precautionary action to prevent plausible future harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving data breaches or security compromises. The article focuses on the risk and regulatory response rather than a realized incident, so it is not Complementary Information or an AI Incident.

South Korea's data regulator asks DeepSeek to explain how it manages user information

2025-02-01
Zaobao
Why's our monitor labelling this an incident or hazard?
The article describes regulatory authorities in several countries demanding explanations from DeepSeek about how it manages user data, and some have taken precautionary measures like blocking the app. While this indicates concerns about possible violations of privacy rights, no actual harm or breach has been reported or confirmed. The event concerns the potential for harm related to data privacy and AI system use, but it is primarily about investigation and precautionary measures rather than realized harm. Therefore, it fits the definition of Complementary Information, as it provides context and updates on governance responses to AI-related data privacy issues without describing a concrete AI Incident or AI Hazard.

Trending: DeepSeek's U.S. trademark snapped up by an alumnus of Liang Wenfeng's university! - Stockstar

2025-02-01
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI products and models) and discusses its development, use, and geopolitical implications. However, it does not describe any concrete event where the AI system has directly or indirectly caused harm such as injury, rights violations, or disruption. The issues are mainly about trademark disputes, privacy investigations, and political caution or restrictions, which are potential or contextual concerns rather than realized incidents. Therefore, this fits the definition of Complementary Information, as it provides supporting context and updates about the AI ecosystem and governance responses without reporting a new AI Incident or AI Hazard.

Taiwan's "Ministry of Digital Affairs" orders government agencies not to use DeepSeek; local netizens: more narrative-driven "cognitive warfare"

2025-02-01
Huanqiu.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI model) and concerns its use by government agencies. The advisory is issued due to potential cybersecurity risks, implying a plausible risk of harm (data leakage, information security breaches) if the AI system is used. However, no actual harm or incident has been reported. Therefore, this qualifies as an AI Hazard, as the advisory highlights a credible potential for harm related to the AI system's use, but no direct or indirect harm has yet occurred.

New NASA rule bars employees from using China's DeepSeek AI technology

2025-02-01
Sina Finance
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm caused by the AI system DeepSeek, nor does it describe a specific event where the AI system's use or malfunction led to harm. Instead, it details a preventive policy measure by NASA to restrict use of the AI system due to potential security and privacy risks. This fits the definition of Complementary Information as it relates to governance and risk management responses in the AI ecosystem rather than a new AI Incident or AI Hazard.

Tech AI express: overnight tech highlights | January 30, 2025

2025-01-29
Sina News Center
Why's our monitor labelling this an incident or hazard?
The article covers multiple AI-related topics such as AI-driven growth in semiconductor orders, investment in AI robotics startups, and regulatory scrutiny of an AI app. However, none of these points describe an AI Incident or AI Hazard as defined: there is no direct or indirect harm caused or plausible future harm identified from AI system development, use, or malfunction. The DeepSeek app removal is related to privacy concerns but does not specify an AI Incident or Hazard. The rest are business and market updates, which fall under general AI ecosystem developments. Therefore, the article is best classified as Complementary Information.

Japanese media: Japanese government issues statement on DeepSeek

2025-01-31
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by the AI system, nor does it indicate any direct or indirect harm resulting from its use or malfunction. Instead, it focuses on the government's stance and intention to monitor and respond appropriately to AI developments, which is a governance and policy response. Therefore, this is Complementary Information as it provides context and updates on societal/governance responses to AI without reporting a new incident or hazard.

Over the Spring Festival, AI newcomer DeepSeek burst onto the scene and quickly caught fire worldwide...

2025-02-01
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and its development and use, but the article does not describe any realized harm or incident caused by the AI system itself. The US government's actions are political and regulatory responses rather than harms caused by the AI system. The article highlights potential geopolitical and market consequences but does not present any direct or indirect injury, rights violations, or other harms attributable to DeepSeek. Therefore, this is not an AI Incident or AI Hazard. The article mainly provides contextual information about the AI ecosystem, geopolitical tensions, and market dynamics, fitting the definition of Complementary Information.

Hundreds of US companies have already blocked DeepSeek over data-risk concerns - cnBeta.COM (mobile)

2025-02-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is being restricted due to plausible risks of data leakage to a foreign government, which could lead to violations of privacy and possibly national security concerns. Since no actual harm has occurred yet but there is a credible risk that the AI system's use could lead to significant harm, this qualifies as an AI Hazard. The event does not describe a realized incident but a preventive measure against potential harm.
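The rationales on this page repeatedly apply the same triage rule: an event involving an AI system is an AI Incident when harm has been realized, an AI Hazard when harm is plausible but not yet realized, and Complementary Information otherwise. A minimal sketch of that decision logic (the function and category strings are illustrative, not the monitor's actual implementation):

```python
def classify_event(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Triage an event following the Incident / Hazard / Complementary
    rule used in the monitor's rationales (illustrative only)."""
    if not ai_involved:
        # Events without an AI system in a pivotal role are context, not incidents.
        return "Complementary Information"
    if harm_realized:
        # Realized harm linked to an AI system's development or use.
        return "AI Incident"
    if harm_plausible:
        # Credible but not-yet-realized harm, e.g. a preventive ban.
        return "AI Hazard"
    return "Complementary Information"

# A database exposure with realized privacy harm:
print(classify_event(True, True, True))   # AI Incident
# A preventive block over a credible data-leak risk:
print(classify_event(True, False, True))  # AI Hazard
```

This also explains why near-identical coverage of the same breach is sometimes labelled Incident and sometimes Hazard: the classifications differ on whether the exposure itself counts as realized harm.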

Citing security concerns, Taiwan bans government agencies from using DeepSeek

2025-01-31
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's AI models) and concerns about its use leading to potential harm to national information security, a form of harm to critical infrastructure and state security. Since no direct harm has occurred but the government is acting to prevent plausible future harm, this fits the definition of an AI Hazard. The event is not a direct incident because no realized harm is reported, but the risk is credible and significant enough to warrant a ban. The article also mentions related international inquiries, reinforcing the concern about data privacy and security risks from this AI system.

French regulator to question DeepSeek - cnBeta.COM (mobile)

2025-01-31
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) under investigation by data protection authorities for potential data privacy risks. However, it does not report any realized harm or incident caused by the AI system. The regulatory inquiries and blocking actions are preventive and investigative measures, aiming to assess compliance and mitigate potential risks. Therefore, this event fits the definition of Complementary Information, as it provides updates on governance responses and regulatory scrutiny related to AI without describing a specific AI Incident or AI Hazard.

Japan weighs in too

2025-01-31
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific harm or incident caused by the AI system, nor does it indicate any plausible immediate risk of harm. Instead, it focuses on governmental monitoring and policy considerations, which are responses to the broader AI ecosystem. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal responses to AI developments without reporting a new incident or hazard.

After restrictions in several countries, Japan states its position on DeepSeek

2025-01-31
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory and governmental responses to an AI system (DeepSeek's generative AI service) and the potential risks and innovations associated with it. However, it does not describe any realized harm or incident caused by the AI system, nor does it report a specific event where harm occurred or was narrowly avoided. Instead, it focuses on monitoring, policy stance, and international regulatory developments, which are responses and contextual information about AI risks and governance. Therefore, this is Complementary Information rather than an AI Incident or AI Hazard.

DeepSeek app pulled in Italy after drawing the attention of the country's privacy regulator

2025-01-29
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use of personal data has triggered regulatory scrutiny, indicating potential legal and privacy risks. Since the regulator is investigating and requesting information but no confirmed harm or violation has occurred, this situation represents a plausible risk of harm related to AI use, fitting the definition of an AI Hazard rather than an AI Incident. The removal of the app from stores is a precautionary measure, not evidence of realized harm.

DeepSeek app no longer available in Italy's Apple and Google app stores - cnBeta.COM (mobile)

2025-01-29
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek app) and concerns about its data processing practices potentially violating data protection laws. However, there is no indication that any actual harm (such as injury, rights violations, or other harms) has occurred yet. The regulatory action and app removal are precautionary measures based on potential risks related to data privacy and compliance. Therefore, this situation represents a plausible risk of harm due to AI system use but no realized harm has been reported, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

From admiration to suppression of DeepSeek: why Silicon Valley changed its tune within a week - cnBeta.COM (mobile)

2025-02-01
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's R1 model) whose development and use have directly led to significant harms: accusations of intellectual property violation (breach of IP rights), cybersecurity attacks against DeepSeek (harm to property and operations), and national security concerns (potential harm to critical infrastructure or rights). The involvement of AI is explicit, and the harms are realized and significant. The geopolitical and regulatory responses further confirm the seriousness of the incident. Thus, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Sensitive data leaked from Chinese chatbot DeepSeek raises security concerns

2025-02-02
Daily Pakistan English News
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek chatbot) whose use and data management practices directly led to a significant data breach exposing sensitive personal and operational data. This constitutes a violation of privacy rights and harm to individuals and communities, fitting the definition of an AI Incident. The breach is realized harm, not just a potential risk, and is directly linked to the AI system's operation and data handling failures.

DeepSeek exposed internal database containing chat histories and sensitive data | TechCrunch

2025-01-30
TechCrunch
Why's our monitor labelling this an incident or hazard?
The exposed database contained chat histories generated by an AI system, indicating AI system involvement. The exposure was due to a misconfiguration, a development or operational error. Although no direct harm is reported, the exposure of sensitive data including API keys and user chats could plausibly lead to privacy violations or other harms if exploited. Since the harm is potential and not confirmed, this event qualifies as an AI Hazard rather than an AI Incident.
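Several of the reports above note that API keys and other backend secrets sat alongside chat logs in the exposed database. One standard mitigation these articles allude to is scanning or redacting key-like tokens before data is stored or shared. A minimal sketch (the `sk-` prefix pattern is an assumed, common API-key format for illustration, not DeepSeek's actual key scheme):

```python
import re

# Match tokens that look like API secrets: an "sk-" prefix followed by
# 20 or more alphanumeric characters (assumed format, illustrative only).
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def redact_secrets(text: str) -> str:
    """Replace key-like tokens with a placeholder before logs are shared."""
    return SECRET_PATTERN.sub("[REDACTED]", text)

log_line = "auth ok, key=sk-abcdefghijklmnopqrstuvwxyz123456"
print(redact_secrets(log_line))  # auth ok, key=[REDACTED]
```

Redaction of this kind limits the blast radius of a leak but is no substitute for the missing control in this case: authentication on the database endpoint itself.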

An unprotected database linked to DeepSeek has been identified that...

2025-01-30
europapress.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose database was left unsecured, leading to exposure of sensitive data. This constitutes a direct harm to users' privacy and security, which falls under violations of rights and harm to individuals. Since the exposure has already occurred, it is a realized harm caused by the AI system's use and management, qualifying as an AI Incident.

DeepSeek suffers its first major leak as more than 1 million chats are exposed - La Opinión

2025-01-31
La Opinión
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's generative AI application) whose database was exposed, leaking sensitive user data and authentication keys. This exposure directly harms users by compromising their privacy and security, fitting the definition of harm to persons or groups (a) and violation of rights (c). The incident stems from the use and management of the AI system's data, and the harm has materialized through the data leak. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek API, chat log exposure a 'rookie' cyber error | Computer W...

2025-01-31
ComputerWeekly.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a generative AI model) whose use and deployment led to a security incident exposing sensitive data. The exposure of chat logs and API secrets directly implicates the AI system's operation and user data security, constituting a violation of privacy and potentially other rights. The harm has materialized as sensitive information was publicly accessible, posing risks of misuse and criminal exploitation. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use and security failures.

Sensitive DeepSeek data exposed to web, cyber firm says

2025-01-30
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) and a data exposure incident related to its infrastructure. The exposure of sensitive data including user prompts and software keys could plausibly lead to harms such as privacy violations or intellectual property breaches, which fall under AI Incident harm categories. However, the article does not report any actual harm occurring, only the exposure and quick remediation. Therefore, this is best classified as an AI Hazard, since the development or use of the AI system led to a circumstance that could plausibly lead to harm, but no harm has yet been reported.

DeepSeek AI platform exposed user data through unsecured database

2025-01-30
SC Media
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's AI platform) whose user data and authentication tokens were exposed through a misconfigured database. The exposure of user prompts and authentication credentials directly implicates violations of data protection and privacy rights, which fall under human rights and legal obligations. The event has already occurred, with the database publicly accessible for some time, constituting realized harm or at least a significant risk of harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek database with private data and chat logs was exposed to the internet

2025-01-30
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, a large language model-based system) whose development and use led to a security flaw exposing sensitive user data. The exposure of chat logs and secret keys is a direct harm to users' privacy and data security, which falls under violations of human rights and breach of obligations to protect fundamental rights. The AI system's role is pivotal as the data exposure stems from how DeepSeek stores and manages user data. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek Data Exposed to Web, Cybersecurity Firm Says

2025-01-30
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) whose database was left publicly accessible without authentication, exposing sensitive user data and system credentials. This exposure directly led to harm in terms of privacy violations and security risks to users and the system itself. The involvement of the AI system in handling sensitive data and the breach caused by its insecure deployment meets the criteria for an AI Incident. The prompt remediation does not negate the fact that harm occurred or was imminent. Therefore, this event is classified as an AI Incident.

Security researchers found a big hole in DeepSeek's security

2025-01-30
Engadget
Why's our monitor labelling this an incident or hazard?
The exposed database is part of the AI platform DeepSeek, which is a generative intelligence system, thus an AI system. The security breach directly involves the AI system's use and deployment, leading to unauthorized access to sensitive data, which is a violation of user privacy and a breach of obligations under applicable law protecting fundamental rights. Even though no confirmed exploitation occurred, the exposure itself is a realized harm scenario as it directly risks harm to users' rights and privacy. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Over a million lines of DeepSeek chat history was exposed in just a few minutes

2025-01-30
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's R1 reasoning model) whose chat data and sensitive information were exposed due to a security vulnerability. The exposure of sensitive chat logs and passwords constitutes a violation of privacy and data protection rights, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The AI system's use and storage of this data is central to the incident. Although no physical harm or disruption is reported, the breach of sensitive data is a significant harm. Hence, this is classified as an AI Incident.

Wiz researchers find sensitive DeepSeek data exposed to internet

2025-01-30
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI and its backend infrastructure) whose development and use led to a direct security incident exposing sensitive data, which constitutes harm to property and potentially to communities through malicious outputs. The jailbreak vulnerability enabling generation of harmful content further indicates direct harm potential. Since the exposure and vulnerabilities have already occurred and pose real risks, this qualifies as an AI Incident rather than a hazard or complementary information.

Italy and Ireland ban DeepSeek on Apple and Google devices

2025-01-30
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (an AI chatbot) whose use has directly led to harms: privacy violations through mishandling and exposure of personal data, which constitutes a breach of data protection laws (GDPR) and thus a violation of legal obligations protecting fundamental rights. The exposed database represents a security failure causing harm to users' data privacy. The accusations by OpenAI of model theft relate to intellectual property rights violations. These harms have materialized, as evidenced by regulatory bans and investigations. Therefore, this event qualifies as an AI Incident due to realized harms linked to the AI system's use and development.

More than a million lines of DeepSeek chat history were left exposed

2025-01-30
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The exposed database belongs to an AI company and contains AI-related chat histories, but the incident is fundamentally a data breach caused by a security misconfiguration, not a failure or misuse of the AI system itself. The harm is to confidentiality and privacy, which can be considered harm to individuals or groups, but the breach was not directly or indirectly caused by the AI system's development or use. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information: it provides context on security risks at AI companies and highlights the need for better security practices in AI development and deployment, without describing a direct AI-caused harm or a plausible future harm from the AI system itself.

Wiz reveals DeepSeek database exposed API keys, chat histories | Te...

2025-01-30
Patient Engagement
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's large language models and associated infrastructure) and its use. The exposure of sensitive data including chat histories and API keys directly led to malicious attacks and service disruption, constituting harm to property and potentially to users' privacy and security. The AI system's security failure is a direct contributing factor to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

DeepSeek Database Left User Data And Chats Exposed

2025-01-30
Techaeris
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose publicly accessible database exposed sensitive user data and system credentials without authentication. This exposure directly harms users and businesses by compromising privacy and security, fulfilling the criteria for harm to persons and violation of rights. The incident stems from the use and deployment of the AI system with inadequate security controls, leading to realized harm. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

DeepSeek AI first data leak discovered by researchers

2025-01-30
Android Headlines
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek AI chatbot) whose data was leaked, exposing sensitive user information. The leak is directly linked to the AI system's infrastructure and data management, leading to harm in the form of privacy violations and potential misuse of personal data. The harm is realized, not just potential, as the data was publicly accessible for some time. This fits the definition of an AI Incident because the AI system's use and malfunction (insecure data exposure) directly caused harm related to violations of rights and privacy.

DeepSeek allegedly exposed internal database containing users' chat histories & sensitive data - Business & Human Rights Resource Centre

2025-01-30
Business & Human Rights Resource Centre
Why's our monitor labelling this an incident or hazard?
The exposed database contained chat histories generated by an AI system, which implies the involvement of an AI system in handling sensitive personal data. The exposure of this data to the open internet without protection directly leads to a violation of privacy and potentially breaches data protection laws, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI system's data handling and security failure.

If You Think Anyone in the AI Industry Has Any Idea What They're Doing, It Appears That DeepSeek Just Accidentally Leaked Its Users' Chats

2025-01-30
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as DeepSeek operates an open-source AI model with user chat data stored in its backend. The vulnerability in the system's security allowed unencrypted access to sensitive data, which is a direct consequence of the AI system's deployment and management. While no actual data breach or harm is reported, the potential for such harm is credible and significant, meeting the criteria for an AI Hazard. The event does not describe realized harm but highlights a plausible risk of harm due to the AI system's insecure handling of sensitive data.

DeepSeek database exposed highly sensitive information - IT Security News

2025-01-31
IT Security News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI platform, and the exposure of its database containing chat histories and secret keys directly relates to the AI system's use and management. The leak of sensitive user information and backend details constitutes a violation of privacy rights and security obligations, fulfilling the criteria for an AI Incident. The prompt remediation does not negate the fact that harm occurred through the exposure.

DeepSeek Data Leak: Over 1 Million User Chats Exposed Online Amid India Expansion Talks

2025-01-31
Times Now
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek R1 chatbot) whose database was left publicly accessible, exposing sensitive user data such as chat histories and secret keys. This is a direct harm related to the AI system's use and malfunction (security misconfiguration), leading to violations of privacy and potentially other rights. The harm is realized, not just potential, and the AI system's involvement is explicit. Hence, it meets the criteria for an AI Incident.

Major security flaw in DeepSeek's AI database raises privacy concerns

2025-01-31
Economy Middle East
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI services) whose database was left publicly accessible without authentication, exposing sensitive user data and operational credentials. This exposure directly harms users by compromising their privacy and potentially enabling unauthorized access to internal systems. The harm is realized, not just potential, and relates to violations of data protection and privacy rights. Therefore, this qualifies as an AI Incident because the AI system's use and deployment directly led to harm to individuals' privacy and security.

Sensitive DeepSeek data exposed to web, cyber firm says

2025-01-30
The Jerusalem Post | JPost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) and its data being exposed due to a security oversight. However, there is no evidence of realized harm such as injury, rights violations, or disruption caused by this exposure. The data was secured quickly, and no direct or indirect harm is reported. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context about AI system security and the company's response to a vulnerability.

Trove of sensitive DeepSeek data exposed to open internet -- cyber firm

2025-01-30
Why's our monitor labelling this an incident or hazard?
The exposed data includes chat logs from an AI assistant, indicating the involvement of an AI system. The incident stems from a security misconfiguration (development/use issue) that led to the exposure of sensitive data, including software keys and user prompts. This exposure directly led to a breach of data security, which is a violation of privacy and could lead to harm to users or the company. Although no explicit harm is reported yet, the breach itself is a realized incident involving an AI system, meeting the criteria for an AI Incident due to indirect harm potential and violation of data protection obligations.

Chinese AI Startup DeepSeek Exposed Sensitive Data, Wiz Reports - EconoTimes

2025-01-30
EconoTimes
Why's our monitor labelling this an incident or hazard?
The exposed data included user prompts to the AI assistant and digital software keys, indicating involvement of an AI system. The exposure of sensitive data constitutes harm to property and potentially to users' privacy and rights. The incident was caused by a malfunction or mismanagement in the AI system's infrastructure. Although the company acted quickly to mitigate the issue, the data was accessible for some time, implying realized harm. Therefore, this event meets the criteria for an AI Incident.

DeepSeek exposed sensitive data to open internet, says Israeli cybersecurity firm

2025-01-30
Asianet Newsable
Why's our monitor labelling this an incident or hazard?
The incident directly involves an AI system (DeepSeek's AI assistant) whose infrastructure exposed sensitive data, including user prompts, to the open internet. This exposure constitutes a breach of privacy and data protection laws, which are part of human rights and legal obligations. The harm has materialized as sensitive information was accessible publicly, even if remediated swiftly. The AI system's development and use are central to the incident, as the data relates to its operation. Hence, this is an AI Incident due to realized harm linked to the AI system's use and infrastructure security failure.

Sensitive DeepSeek data exposed to web, cyber firm says - ET CISO

2025-01-30
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose data was exposed due to a security lapse. Although no harm has been reported as having occurred, the exposure of sensitive data including user prompts and software keys could plausibly lead to harms such as privacy breaches or intellectual property violations if exploited. Since the harm is potential and not realized, this qualifies as an AI Hazard rather than an AI Incident. The quick remediation reduces immediate risk but does not negate the plausible future harm from the exposure.

Sensitive DeepSeek data exposed to web: Cyber firm

2025-01-30
Al Arabiya English
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose infrastructure was found to have a security vulnerability leading to the exposure of sensitive data, including user prompts and software keys. This constitutes a breach of confidentiality and potentially a violation of privacy rights, which falls under harm to individuals or groups. Although the data exposure was accidental and quickly fixed, the incident directly led to harm in terms of data exposure and privacy risk. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use and deployment.

Confidential data of DeepSeek users was discovered online

2025-01-30
oreanda-news.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose user data, including AI chat query histories, was leaked due to a security lapse. The exposure of confidential data constitutes a violation of privacy and potentially user rights, which fits the definition of harm under AI Incident (c) regarding violations of rights. The AI system's development and use are directly linked to the incident because the data breach involves AI user data. Although the data was deleted quickly, the breach occurred and the harm (privacy violation) is realized. Therefore, this qualifies as an AI Incident.

DeepSeek accidentally exposed sensitive data including user prompts, cyber firm reports

2025-01-30
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose data, including user prompts, was exposed unintentionally. This exposure directly led to a breach of user privacy and potential violation of rights, which fits the definition of harm (c) under AI Incident. The AI system's use and data handling led to this harm. Although the company acted quickly to remove the data, the harm from exposure already occurred. Therefore, this event is classified as an AI Incident.

DeepSeek Data Leak: Cyber Firm Reports Data Exposed to Web

2025-01-30
TechWorm
Why's our monitor labelling this an incident or hazard?
The exposed database contained sensitive data generated and used by an AI system (DeepSeek's chatbot), including chat histories and secret keys, which are directly linked to the AI system's operation. The leak of personal information and operational secrets constitutes a violation of user privacy and data protection rights, fulfilling the criteria for harm to rights under the AI Incident definition. The involvement of the AI system is explicit, and the harm has occurred, not just a potential risk. Hence, this is classified as an AI Incident.

US cybersecurity firm finds DeepSeek data exposed on open internet

2025-01-30
computing.co.uk
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, DeepSeek's AI assistant, and its data infrastructure. The exposure of sensitive user data and system credentials constitutes a breach of privacy and potentially violates user rights, which aligns with harm category (c) - violations of human rights or breach of obligations protecting fundamental rights. The exposure could lead to misuse of data, unauthorized access, and further security incidents. Since the harm has already occurred (data exposure), this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely a product announcement or general news but a concrete incident involving AI system data exposure and associated harms.

DeepSeek AI Exposed Over 1M Chat History Logs and API Keys

2025-01-30
CyberInsider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek R1) whose user data and operational logs were exposed due to a security lapse. This exposure directly leads to harm in the form of privacy violations and potential misuse of sensitive information, which falls under violations of human rights and breach of obligations to protect user data. The AI system's development and use are implicated because the logs and API keys relate to its operation. The harm is realized, not just potential, as the data was publicly accessible. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek Data Leak Exposes 1,000,000 Sensitive Records

2025-02-02
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI-driven data analytics and machine learning platform) whose development and use led to a data leak exposing sensitive information. The leak directly harms individuals' privacy rights and potentially breaches data protection laws like GDPR and CCPA. The AI system's role is pivotal because the data exposed relates to AI operations and training data, and the incident stems from the AI company's data management practices. The harm is realized (data exposure), not just potential, so this is an AI Incident rather than a hazard or complementary information.

DeepSeek data breach: A grim warning for AI security

2025-02-02
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek R1 chatbot) whose database was misconfigured, leading to a data breach exposing sensitive user data and operational secrets. This breach constitutes a violation of user rights and privacy, a recognized harm under the AI Incident definition. The AI system's vulnerabilities to cyberattacks also indicate malfunction or misuse risks. The harm is realized, not just potential, as sensitive data was exposed and regulatory investigations are underway. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Security flaw exposes sensitive data of DeepSeek users

2025-01-30
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The exposed database is linked to DeepSeek, an AI company, and contains sensitive user data generated or processed by its AI system. The lack of authentication and security measures allowed unauthorized access, directly harming users' privacy and security. This is a realized harm caused by the AI system's use and its inadequate protection, fitting the definition of an AI Incident due to violation of rights and harm to persons.

Report: DeepSeek's chat histories and internal data were publicly exposed

2025-01-30
Ars Technica
Why's our monitor labelling this an incident or hazard?
The exposed database contained over a million chat histories and sensitive operational data from DeepSeek, an AI firm, indicating direct involvement of an AI system. The breach led to unauthorized access to personal and internal data, constituting harm to individuals' privacy and potentially violating data protection rights. The incident arose from the AI system's use and deployment with insufficient security, directly causing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek: Risk of sensitive data leak sounds alarms worldwide

2025-01-31
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as DeepSeek is an AI company with a model whose data and system logs were exposed. The breach is a direct consequence of the AI system's development and operation environment failing to secure sensitive data. The exposure of private user data and internal AI model information constitutes a violation of privacy and potentially other rights, which fits the definition of harm under AI Incident (c) - violations of human rights or breach of obligations protecting fundamental rights. Since the harm has already occurred (data exposure and potential misuse), this is an AI Incident rather than a hazard or complementary information.

DeepSeek Locked Down Public Database Access That Exposed Chat History

2025-01-30
TechRepublic
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's generative AI) and a security breach exposing sensitive data such as chat histories and API secrets. This exposure directly relates to the use and deployment of the AI system and constitutes a violation of data privacy and security, which falls under harm to rights and potentially harm to communities. Although the breach was responsibly disclosed and mitigated before known malicious exploitation, the incident itself represents realized harm due to the exposure of sensitive information. Therefore, this qualifies as an AI Incident.

Leak of DeepSeek users' private data: what happened? - ON ECONOMIA

2025-01-30
En Blau
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek's AI chatbot) and a security incident where sensitive user data was exposed due to an unprotected database. This exposure constitutes a violation of privacy and potentially breaches legal obligations regarding data protection, fitting the definition of harm under (c) violations of human rights or breach of applicable law protecting fundamental rights. The incident has already occurred, with direct harm realized through data exposure. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Wiz Research locates, publishes information as to DeepSeek security flaw

2025-01-31
O'Grady's PowerPage
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose development or use led to a data breach exposing sensitive user information and operational secrets. This exposure constitutes a violation of privacy rights and potentially national security, which falls under violations of human rights or breach of obligations under applicable law. Since the harm (privacy breach) has already occurred and investigations are underway, this qualifies as an AI Incident.

DeepSeek vulnerabilities expose confidential data

2025-01-30
es.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a chatbot using a language model) whose database was exposed without authentication, leaking confidential user data and software keys. This exposure directly harms users by compromising their privacy and security, fulfilling the criteria for harm to persons and violation of rights. The involvement of the AI system is explicit, and the harm has materialized. The responsible disclosure and remediation efforts are noted but do not negate the incident classification. Hence, this event is an AI Incident.

DeepSeek database left user data, chat histories exposed for anyone to see

2025-01-30
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves an AI system's data (user chat histories and API keys) being exposed due to a security misconfiguration, which directly harms user privacy and security. The AI system's development and use are implicated because the exposed data includes AI chat histories and API keys. The exposure constitutes a violation of rights and potential harm to users, fulfilling the criteria for an AI Incident. The incident is not merely a potential hazard or complementary information, as the exposure has already occurred and the harm is realized or highly likely.

DeepSeek Data Exposed to Web, Cybersecurity Firm Says

2025-01-30
NTD
Why's our monitor labelling this an incident or hazard?
The exposed database contained sensitive information related to the AI system DeepSeek, including user chat logs and software keys, which are integral to the AI's operation. The lack of security controls allowed unauthorized access, representing a malfunction or failure in the AI system's deployment environment. This led to direct harm by compromising user data privacy and security, fulfilling the criteria for an AI Incident under violations of rights and harm to users. The incident was realized and not merely a potential risk, and the AI system's involvement is explicit and central to the event.

DeepSeek exposed chat history and other sensitive data

2025-01-30
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose operation and data management led to the exposure of sensitive user data and secret keys due to a security misconfiguration (lack of authentication on a database). This exposure constitutes a violation of privacy and potentially national security concerns, which are harms under the framework (violations of rights and harm to communities). The harm has already occurred, not just a potential risk, so it is an AI Incident rather than a hazard. The involvement of the AI system is explicit and central to the incident, as the data exposed includes chat histories generated by the AI system's use.

Internal DeepSeek database exposed, containing chat histories and confidential data

2025-01-30
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose backend database containing AI-generated chat histories and sensitive data was exposed due to misconfiguration. This exposure directly led to harm in the form of a violation of confidentiality and potentially of user rights, fulfilling the criteria for an AI Incident. Although no explicit malicious use or harm beyond the exposure is reported, the leak of confidential user data and API keys is a clear harm under the framework's definition of violations of rights and harm to individuals.

DeepSeek 'leaking' sensitive data: cybersecurity company says "within minutes, we found..." - The Times of India

2025-01-30
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI Assistant) whose database was publicly accessible, leading to the exposure of sensitive user data and operational secrets. This exposure is a violation of privacy and potentially human rights related to data protection. The harm has already occurred as sensitive data was leaked. The AI system's development and use directly led to this harm through a security vulnerability. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Security Researchers Uncover Major Data Exposure at DeepSeek AI

2025-01-31
Gadget Review
Why's our monitor labelling this an incident or hazard?
The exposed database contained user chat logs generated by the AI system and API secrets, indicating direct involvement of the AI system's data. The exposure of personal and sensitive data can be considered harm to individuals' privacy and potentially a violation of rights. The incident is directly linked to the AI system's use and data handling practices, fulfilling the criteria for an AI Incident. The quick remediation does not negate the fact that harm occurred due to the exposure.

Deep trouble: Infosec firm finds a DeepSeek database 'completely open and unauthenticated' exposing chat history, API keys, and operational details

2025-01-31
pcgamer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's R1 AI model and its backend infrastructure) whose database was left completely open and unauthenticated, exposing sensitive information including chat histories and API keys. This exposure directly harms data privacy and security, which can be considered harm to property and potentially to individuals or organizations relying on the system. The involvement of multiple data regulators and security warnings from the US Navy and National Security Council further confirm the seriousness and realized nature of the harm. The AI system's insecure deployment and vulnerability to exploitation constitute an AI Incident as per the definitions, since the development and use of the AI system directly led to significant harm through data exposure and security risks.

DeepSeek's Security Lapse Raises Red Flags for AI Adoption, Regulation - Techiexpert.com

2025-01-31
Techiexpert.com
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's AI model and its user chat histories) and a security lapse that led to exposure of sensitive data, including personal user information and operational details. This exposure constitutes a violation of data privacy rights, which falls under violations of human rights or breach of obligations under applicable law protecting fundamental rights. The breach has already happened, so it is a realized harm, not just a potential risk. Although the breach was due to a security misconfiguration rather than an AI malfunction, the AI system's data was directly involved and compromised. Hence, this is classified as an AI Incident.

A security flaw in DeepSeek leaves sensitive data exposed

2025-01-31
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI research platform) whose security flaw led to the exposure of sensitive data, including user chat histories and API secrets. This constitutes a breach of confidentiality and potentially a violation of privacy and data protection rights, which falls under harm to rights as defined in the framework. Since the exposure has already occurred and external actors could have accessed sensitive information, this is a realized harm directly linked to the AI system's malfunction (security failure). Therefore, this qualifies as an AI Incident.

Serious security flaw discovered in DeepSeek: your data was left exposed on the internet

2025-01-31
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use and deployment led to a direct harm: exposure of confidential user data and system information. This constitutes a violation of privacy and potentially legal rights related to data protection, fitting the definition of an AI Incident. The breach was caused by a security flaw in the AI system's infrastructure, and the harm has already occurred. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Serious DeepSeek security flaw exposed more than a million unprotected records

2025-01-30
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek R1) whose internal database containing sensitive user data was exposed without password protection, allowing unauthorized access. This exposure directly led to harm in the form of a violation of privacy rights and the potential misuse of confidential information. The AI system's development and use are central to the incident, as the data relates to AI chat histories and API keys. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

DeepSeek's online version has been publicly exposing user chats, according to Wiz. Here's what we know

2025-01-30
xataka.com
Why's our monitor labelling this an incident or hazard?
The exposed database contained sensitive user chat data generated by the AI system DeepSeek, indicating direct involvement of the AI system in the harm. The unauthorized access to user conversations is a violation of privacy rights and data protection laws, fulfilling the criteria for harm under violations of human rights and legal obligations. The incident is not merely a potential risk but a realized breach, as external actors could access the data. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Sensitive DeepSeek data exposed to web

2025-01-30
Ammon News
Why's our monitor labelling this an incident or hazard?
The exposed database contains sensitive information related to the AI system's operation and user interactions, indicating the involvement of an AI system. The exposure is due to a security lapse, not a malfunction of the AI itself, but the data leak could indirectly lead to harms such as privacy breaches or intellectual property violations. Since no actual harm is reported yet, but the risk is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a new security exposure with potential for harm.

Wiz uncovers major DeepSeek data exposure | Ctech

2025-01-30
ctech
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the exposed data includes user prompts from an AI assistant and backend operational details. The exposure of sensitive data constitutes a security breach that could lead to violations of privacy and intellectual property rights, which are harms under the AI Incident definition. However, since the data was secured promptly and no actual harm or misuse is reported, this event represents a plausible risk rather than a realized harm. Therefore, it qualifies as an AI Hazard rather than an AI Incident.

Sensitive DeepSeek data exposed to web, cyber firm says

2025-01-30
RAPPLER
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek's AI assistant) is involved as the exposed data includes user prompts to the AI. However, the incident concerns a data exposure due to a security misconfiguration rather than a malfunction or misuse of the AI system itself. There is no indication that the exposure directly caused harm such as injury, rights violations, or disruption. The data was secured promptly, and no harm is reported. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI ecosystem security and responses to a potential risk without documented harm.

Sensitive DeepSeek data exposed to web, Israeli cyber firm says

2025-01-30
ThePrint
Why's our monitor labelling this an incident or hazard?
The exposed data includes chat logs from users interacting with an AI assistant, indicating the involvement of an AI system. The exposure of sensitive data such as software keys and user prompts constitutes a security breach that could lead to harm, such as privacy violations or unauthorized access. Since the article reports the exposure as having occurred but does not detail any realized harm or consequences, it qualifies as an AI Incident due to the direct link between the AI system's data exposure and potential harm to users' privacy and security.

Sensitive DeepSeek data exposed to web, Israeli cyber firm says

2025-01-29
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's AI assistant) whose data, including user prompts, was inadvertently exposed to the open internet. This exposure of sensitive data constitutes a breach of confidentiality and potentially a violation of user privacy and intellectual property rights. Although the data was secured quickly and no direct harm is reported, the exposure itself is a realized harm related to the AI system's use and data management. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's operation and the harm of data exposure.

DeepSeek Exposed Database Leaks Sensitive Data

2025-01-30
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek AI chatbot) whose database was exposed due to a security vulnerability, leaking sensitive data including chat histories and API keys. This exposure directly leads to harm in terms of privacy violations and potential breaches of data protection laws, which are violations of human rights and legal obligations. The AI system's development and use (its database infrastructure) directly led to this harm. The prompt securing of the vulnerability does not negate the fact that harm occurred. Hence, this is classified as an AI Incident.

Sensitive DeepSeek Data Exposed Online | Silicon UK Tech News

2025-01-30
Silicon UK
Why's our monitor labelling this an incident or hazard?
The exposed database contained sensitive information directly related to an AI system (DeepSeek's chatbot), including user prompts and API tokens, which are critical for the system's operation and user privacy. The exposure allowed potential unauthorized access and privilege escalation, posing direct harm to users' data security and privacy rights. The incident involved the use and malfunction (inadequate security) of an AI system leading to realized harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek Exposed Millions of Sensitive Logs - TechNadu

2025-01-30
TechNadu
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and its infrastructure. The exposure of sensitive chat logs and API keys due to a misconfigured database directly leads to harm in terms of privacy violations and potential breaches of legal obligations protecting user data. The AI system's development and use are implicated because the logs and keys relate to its operation. The harm is realized, not just potential, as sensitive data was accessible to unauthenticated users. Therefore, this qualifies as an AI Incident under the definitions provided, specifically under violations of human rights or breach of obligations under applicable law (privacy rights).

DeepSeek database contains chat history, internal secrets for anyone to see

2025-01-30
TweakTown
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek), whose infrastructure stores chat history and API secrets. The exposure of secret keys and the ability to execute code without oversight represent a malfunction or misuse of the AI system's infrastructure. Although no direct harm such as injury or rights violations is reported, the exposure of sensitive data and the potential for unauthorized control pose a significant risk of harm. Because that risk is credible and plausible but no harm is reported as having occurred, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information, because the main focus is the vulnerability and its potential consequences rather than responses or ecosystem context.

DeepSeek's Database Might Have Been Leaked Exposing Chat History

2025-01-31
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek platform) whose backend database containing chat histories and secret keys was exposed due to a security vulnerability. This exposure could plausibly lead to violations of user privacy and intellectual property rights, as well as other harms if malicious actors exploit the data. Since the harm is not yet realized but the risk is credible and significant, this qualifies as an AI Hazard under the framework.

Exposed DeepSeek Database Revealed Chat Prompts and Internal Data

2025-01-29
Wired
Why's our monitor labelling this an incident or hazard?
The exposed database contained user prompts and API keys related to the AI system DeepSeek, indicating direct involvement of an AI system. The event stems from a security failure in the use and management of the AI system's infrastructure, leading to unauthorized data exposure. This exposure constitutes a breach of obligations under applicable law intended to protect fundamental rights, including privacy and data protection, thus qualifying as an AI Incident. The harm is realized as sensitive user data was publicly accessible, even if the extent of malicious exploitation is unknown.

Korea to question DeepSeek regarding data protection, security policies

2025-02-02
중앙일보 (JoongAng Ilbo)
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system used for generating chatbot responses, involving personal data collection and processing. The exposure of sensitive data due to an unauthenticated database is a malfunction of the AI system's data management and security practices, directly leading to harm through privacy violations and potential misuse of data. The involvement of multiple data protection authorities and the blocking of the app in Italy confirm the materialization of harm. Therefore, this event qualifies as an AI Incident because the AI system's malfunction has directly led to violations of data protection rights and security risks to users.

Risk & Repeat: DeepSeek security issues emerge | TechTarget

2025-01-30
TechTarget
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (large language models) whose development and use have led to security incidents involving data exposure and attacks. The exposure of sensitive information such as chat histories and secret keys constitutes harm to property and potentially to users' privacy rights, which falls under violations of rights and harm to property. The malicious attacks and vulnerabilities represent realized harms linked to the AI system's use and security. Therefore, this qualifies as an AI Incident.

New research reports find DeepSeek's models are easier to manipulate than U.S. counterparts

2025-01-31
Axios
Why's our monitor labelling this an incident or hazard?
The exposed database and the ability to manipulate DeepSeek's AI models to produce malicious content demonstrate a direct link between the AI system's vulnerabilities and potential harms such as cyberattacks, physical harm (Molotov cocktail instructions), and security risks. The researchers' findings highlight realized security failures and misuse of the AI system, which fits the definition of an AI Incident due to the direct or indirect harm caused or likely to be caused by the AI system's malfunction and misuse.

DeepSeek Left Database With Chat History, Internal Secrets Out In Public For Anyone To Access

2025-01-29
Wccftech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose database containing sensitive user data and internal secrets was publicly accessible without authentication, allowing potential malicious actors to access and manipulate the system. This directly leads to harm in terms of privacy violations and potential misuse of personal and proprietary information. The AI system's development and use are central to the incident, as the exposed data includes chat histories and keys used for user identification. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's malfunction or misconfiguration leading to data exposure and privacy breaches.

DeepSeek database was 'completely open,' leaving chat logs out there for all to see

2025-01-30
Android Police
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI) is explicitly involved, as it is an AI chat model with a database of chat logs and backend data. The event stems from a malfunction or security flaw in the AI system's data management, leading to exposure of sensitive information. This exposure constitutes a violation of privacy rights, which falls under violations of human rights or breach of obligations under applicable law protecting fundamental rights. Although no direct evidence of exploitation is mentioned, the exposure itself is a realized harm. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's malfunction and harm to users' privacy rights.

DeepSeek is Leaking Sensitive Information, Says a Report from a Cloud Security Firm

2025-01-30
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose development and use have directly led to a data breach exposing sensitive personal and operational data. This breach constitutes a violation of privacy rights and data protection laws, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The involvement of the AI system is explicit, and the harm is realized, as evidenced by bans, warnings, and regulatory investigations. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Data Exposure Undermines China's AI Sensation DeepSeek | Technology

2025-01-30
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The exposed data includes user chat logs from an AI assistant, indicating the involvement of an AI system. The exposure of sensitive data is a security incident that could lead to violations of privacy rights or other harms, but the article states the data was secured quickly and does not report any actual harm or misuse. Since no direct or indirect harm has been confirmed, and the event concerns a data exposure that could plausibly lead to harm, it fits best as an AI Hazard rather than an AI Incident. The rapid securing of data and lack of reported consequences mean it is not Complementary Information or Unrelated.

How long did it take to hack a million lines of DeepSeek chat history?

2025-01-30
Government Technology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose security was compromised, leading to unauthorized access to sensitive user data (chat history). This constitutes a violation of privacy and potentially breaches data protection rights, which falls under harm to groups of people and violation of rights. The harm has already occurred as data was accessed, making this an AI Incident rather than a hazard or complementary information.

Security Researchers Found A Big Hole In Deepseek's Security

2025-01-30
BruneiDirect
Why's our monitor labelling this an incident or hazard?
The exposed database is directly related to the AI system DeepSeek, a generative AI platform. The security hole allowed access to sensitive user data and system information, which constitutes a violation of privacy and could lead to harm if exploited. Although no actual harm has been reported yet, the vulnerability represents a credible risk of harm to users and to the system's integrity. This event therefore qualifies as an AI Hazard, because it plausibly could lead to an AI Incident involving harm to users' privacy and data security. It is not an AI Incident, because no confirmed harm has occurred; nor is it merely Complementary Information or Unrelated, as the security exposure is a significant risk tied to the AI system's use and development.

Critical Security Flaw in DeepSeek AI Exposed

2025-01-30
Digit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI chatbot) and details a security flaw in its supporting infrastructure that exposed sensitive data, including chat histories and API secrets. This exposure constitutes a violation of privacy and potentially other rights, which aligns with harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. The AI system's deployment and use directly led to this harm through the security flaw. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and the AI system's role is pivotal.

DeepSeek Security Scrutinized Amid Data Leaks, Jailbreaks

2025-01-30
The Cyber Express
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and details a security incident where an unauthenticated database exposure led to access to sensitive data, including chat histories and secret keys. This constitutes a direct harm to users' privacy and security, which falls under violations of rights and harm to property (data). The surge in fraud and phishing attempts exploiting the AI's popularity further indicates realized harm to individuals. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's security failure and misuse.

Sensitive DeepSeek data published on the internet, according to an Israeli cyber firm

2025-01-30
MarketScreener
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose development and use led to a data breach exposing sensitive information, including user prompts and software keys. This constitutes a violation of privacy and potentially intellectual property rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. Since the exposure occurred and caused harm through unauthorized data availability, this qualifies as an AI Incident. The rapid remediation does not negate the fact that harm occurred due to the AI system's use and data management practices.

Data leak at DeepSeek: More than a million records...

2025-01-30
Die Presse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose data, including sensitive user queries, was exposed due to a security lapse. This exposure of personal and usage data constitutes a violation of user privacy and possibly applicable data protection laws, which falls under violations of human rights or legal obligations protecting fundamental rights. The AI system's development and use led indirectly to this harm through the data leak. Therefore, this qualifies as an AI Incident.

Deepseek: Data leak discovered at Chinese AI start-up

2025-01-30
Handelsblatt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Deepseek's AI assistant) whose data leak has directly led to exposure of sensitive user data and software keys. This constitutes a violation of privacy and potentially intellectual property rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The leak is a realized harm, not just a potential risk, thus qualifying as an AI Incident.

Der Börsen-Tag: data leak discovered at Deepseek

2025-01-30
n-tv.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a free AI assistant) whose data, including user queries, was accidentally exposed due to a security lapse. This exposure constitutes a violation of privacy, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. The leak has already occurred, so this is a realized harm, making it an AI Incident rather than a hazard or complementary information.

Chat logs were publicly accessible: data leak at Deepseek - what it means for users

2025-01-30
RP ONLINE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Deepseek, a ChatGPT competitor) whose app experienced a data leak exposing sensitive user chat logs. This is a direct harm to users' privacy and potentially a violation of rights protected by law. The leak was caused by the AI system's use and its security failure, leading to realized harm. Therefore, this qualifies as an AI Incident.

Artificial intelligence: data leak discovered at Chinese AI startup DeepSeek

2025-01-30
Handelsblatt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose data was leaked, including user queries, which are outputs and inputs related to the AI system. This leak directly leads to a violation of user privacy and potentially breaches data protection rights, which falls under violations of human rights or legal obligations protecting fundamental rights. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's use and data management failure.

DeepSeek: gigantic data leak at the Chinese AI

2025-01-30
COMPUTER BILD
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose data, including user queries and software keys, was exposed due to a security failure. This exposure constitutes a violation of user privacy and possibly legal protections related to data security, which falls under violations of human rights or legal obligations. Therefore, this is an AI Incident as the AI system's use and data handling directly led to harm through the data breach.

Data leak discovered at Chinese AI startup DeepSeek

2025-01-30
Salzburger Nachrichten
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose user data was exposed due to a security lapse, directly leading to harm in terms of privacy violations and potential misuse of sensitive information. This fits the definition of an AI Incident because the AI system's development and use led to a breach of obligations under applicable law protecting user data and privacy, which is a violation of rights. The harm is realized, not just potential, as the data was accessible publicly for some time.

"It was so easy to find": experts discover data leak at Deepseek

2025-01-30
n-tv.de
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Deepseek AI chat assistant) whose use led to a data breach exposing sensitive user data, including chat logs and software keys. This constitutes a violation of user privacy and potentially breaches legal protections for personal data, fitting the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm has already occurred due to the exposure of sensitive data, even though the leak was quickly removed. Therefore, this event is classified as an AI Incident.

Security firm: over one million DeepSeek records exposed online, including users' query content

2025-01-30
Ming Pao News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose use led to the exposure of sensitive user data and software keys, which is a violation of data privacy and security. This harm falls under violations of rights and harm to property (digital assets). The exposure was real and materialized, not just a potential risk, thus qualifying as an AI Incident. The AI system's development and use directly contributed to the incident due to improper data protection measures.

Database misconfiguration at viral Chinese AI service DeepSeek leaks confidential logs

2025-01-30
iThome
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek) whose database was misconfigured without authentication, allowing unauthorized access to sensitive data including AI chat logs and API keys. The exposure of confidential information constitutes harm to property and potentially violates privacy rights, fulfilling the criteria for an AI Incident. The AI system's development and use directly led to this harm through inadequate security controls. The event is not merely a potential risk but a realized data breach, so it is not an AI Hazard or Complementary Information.

[CDT Report Roundup] Cybersecurity firm Wiz finds DeepSeek database leaked sensitive user information (plus two other reports)

2025-02-03
China Digital Times
Why's our monitor labelling this an incident or hazard?
The DeepSeek database leak involves an AI startup's infrastructure that supports AI services, and the exposure of sensitive data due to lack of authentication is a direct harm linked to the AI system's use and security management. This meets the criteria for an AI Incident because the AI system's use and its infrastructure's misconfiguration directly led to harm (data breach and privacy violation). The other topics in the article do not involve AI systems or AI-related harm and thus are classified as unrelated. The report also includes a remediation action (DeepSeek fixed the vulnerability), but the primary event is the data breach itself, which is an incident, not merely complementary information.

Over a million pieces of DeepSeek data leaked as countries respond to security risks | Wiz Research | NTD Television

2025-02-01
NTDChinese
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI model) and a security incident where sensitive data was leaked, including user conversations and API keys. This constitutes a direct harm to users' privacy and a violation of applicable data protection laws, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The involvement of the AI system is clear, as the data leaked pertains to the AI service's operation and user interactions. The incident has already occurred and caused harm, not just a potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.

Fully exposed: security firm reveals DeepSeek left users' personal data open on the internet - Liberty Times Finance

2025-01-30
ec.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose user data and chat records were exposed due to a security vulnerability, leading to a direct breach of user privacy and data security. This constitutes a violation of fundamental rights and harm to property (user data). The exposure was unintentional but resulted from the AI system's development and deployment environment. Since actual harm (data exposure and privacy violation) occurred, this qualifies as an AI Incident rather than a hazard or complementary information.

Security firm: large amounts of sensitive DeepSeek data exposed online | Technology | CNA

2025-01-30
Central News Agency (CNA)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose use led to the collection and storage of sensitive user data. The inadvertent exposure of this data on an open network directly harms users by compromising their privacy and potentially violating legal protections. The involvement of the AI system in the data collection and the failure to secure the data properly constitutes a malfunction or misuse leading to harm. Since the harm (exposure of sensitive data) has already occurred, this is classified as an AI Incident rather than a hazard or complementary information.

DeepSeek hit by data leak crisis as security concerns grow: over a million logs and keys exposed

2025-01-30
ezone.hk
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's AI service) whose backend data, including sensitive logs and API keys, was exposed due to a security misconfiguration. This exposure directly leads to harm by risking unauthorized access to private and operational data, which can violate privacy rights and potentially lead to further malicious actions. The AI system's development and use are central to the incident, as the data belongs to the AI service. Therefore, this qualifies as an AI Incident due to realized harm from the data breach linked to the AI system.

Security firm: large amounts of sensitive DeepSeek data exposed online | The Epoch Times (Taiwan)

2025-01-30
The Epoch Times (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose backend data, including user interactions and sensitive operational data, was exposed due to a security misconfiguration. This exposure constitutes a violation of data privacy and potentially breaches legal obligations protecting personal data and user rights. The harm here is realized as sensitive user data was accessible publicly, posing risks to individuals' privacy and security. Therefore, this qualifies as an AI Incident because the AI system's use and its associated data handling directly led to harm through data exposure.

Security firm: large amounts of sensitive DeepSeek data exposed online

2025-01-31
TechNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI assistant) whose use led to the accidental exposure of sensitive user data and software keys, which is a violation of privacy and security rights. The exposure of such data can cause harm to individuals and organizations, fulfilling the criteria for harm under (c) violations of human rights or breach of obligations under applicable law. Although the data was quickly secured after notification, the harm of exposure occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Security doubts put countries on alert over DeepSeek | The Epoch Times (Taiwan)

2025-02-02
The Epoch Times (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI model) whose data, including user conversations and API keys, was leaked due to a security vulnerability. This breach has directly caused harm by exposing sensitive personal data, violating privacy rights, and raising national security concerns. The involvement of the AI system in the incident is clear, as the data breach stems from the AI service's infrastructure and operation. The harm is realized, not just potential, and multiple governments have taken action in response. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to users and communities due to the AI system's malfunction or misuse.

What future does the DeepSeek app have in the US? Users may delete it en masse

2025-02-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a generative AI application) whose use has resulted in direct harms such as exposure of sensitive personal data, privacy violations, and national security risks due to data being accessible to Chinese authorities and vulnerabilities exploited by attackers. Multiple countries have banned or restricted its use due to these harms. The article reports realized harms (data breaches, privacy violations) and credible national security risks linked to the AI system's development and use. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to violations of rights and harm to communities and national security.

What future does the DeepSeek app have in the US? Users may delete it en masse

2025-02-03
The Epoch Times
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a generative AI application) whose use has directly led to harms including privacy violations, data breaches, and national security risks. The article details actual data exposure incidents, bans by governments due to security concerns, and expert warnings about the AI system's risks. These constitute realized harms to individuals' privacy and potentially to national security, which falls under violations of rights and harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Companies and institutions worldwide block access to China's DeepSeek... fears of data leaks

2025-01-31
Chosunbiz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has directly led to realized harms: privacy violations through unauthorized data collection and sharing, potential data leakage to a foreign government, and exploitation by malicious actors to distribute malware. These harms fall under violations of human rights and significant harm to communities and organizations. The blocking of access by numerous entities confirms the recognition of these harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Were all my chat records stolen? "Signs that one million DeepSeek usage records were leaked"

2025-01-31
Financial News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a data breach involving an AI startup's database containing sensitive user data, including chat records and behavioral data used for AI training. The breach led to unauthorized access to over one million records, posing direct harm to users' privacy and potentially violating data protection rights. The AI system's involvement is clear as the data is collected and used for AI model training, and the breach stems from inadequate security controls around this AI-related data. This meets the criteria for an AI Incident because the harm (privacy violation and data exposure) has occurred and is directly linked to the AI system's data management.

Personal Information Protection Commission to examine China's DeepSeek personal data handling

2025-01-31
A playground for people changing the world with technology
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system processing personal data, and its use has led to direct concerns and actions by multiple governments and data protection authorities due to potential violations of privacy rights and data protection laws. The blocking of access and official investigations indicate that harms related to privacy and legal rights have materialized or are ongoing. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of human rights and legal obligations concerning personal data protection.

Concerns over DeepSeek data leaks: "No way to know where the information goes"

2025-01-31
Asia Economy
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI chatbot) whose use and data handling practices raise significant concerns about potential privacy violations and data leakage. Although no direct harm has been confirmed, the discovery of exposed databases and the widespread blocking of access by companies and government bodies indicate a plausible risk of harm to individuals' privacy and data security. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to violations of rights and harm to property (data). There is no indication that harm has already occurred, so it is not classified as an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential risks and responses to the AI system.

DeepSeek data leak fears: "Companies block access to DeepSeek"

2025-01-31
YTN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) whose use has raised significant concerns about potential data leakage and privacy violations, especially involving sensitive government and corporate data. The blocking of access by many organizations indicates recognition of a credible risk. However, the article does not report any actual realized harm or incident caused by the AI system, only the plausible risk and preventive measures taken. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

'Will all our data leak to China?' US companies fear DeepSeek

2025-01-31
Kukmin Ilbo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek AI chatbot) whose use is causing widespread concern about data leakage and privacy vulnerabilities. The blocking of access by governments and companies, investigations into semiconductor use, and regulatory scrutiny indicate recognition of credible risks. However, the article does not report confirmed incidents of harm such as actual data breaches or violations that have already occurred. The focus is on the plausible risk of harm due to the AI system's operation and data handling practices, fitting the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because the main narrative centers on the potential for harm and the preventive actions taken, not on responses to a past incident.

DeepSeek data leak fears: US Congress and Defense Department also restrict and block it

2025-01-31
Munhwa Ilbo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the DeepSeek chatbot) whose use is being restricted due to concerns about data privacy and potential unauthorized data sharing with the Chinese government. The concerns relate to possible violations of privacy rights and data security, which are recognized harms under the framework. Since no actual data breach or harm is reported, but the risk is credible and has led to preventive actions, this fits the definition of an AI Hazard. The involvement is through the use of the AI system and the potential for harm is plausible and significant, justifying classification as an AI Hazard rather than an Incident or Complementary Information.

DeepSeek: researchers discover flaw and leak of sensitive data - Software e App - Ansa.it

2025-01-30
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI software system, as indicated by the mention of AI and the context of the software. The security flaw allowed unauthorized access to sensitive data, including chat histories, access information, and developer API details. This constitutes a breach of obligations intended to protect fundamental rights, including privacy and data security. Even though it is not confirmed that data was stolen, the exposure itself represents a realized harm or at least a direct risk of harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's vulnerability.

DeepSeek reportedly exposed user data and messages by mistake

2025-01-30
la Repubblica
Why's our monitor labelling this an incident or hazard?
The event describes a concrete data breach involving sensitive user information and API keys related to an AI language model system. The exposure of such data directly harms users' privacy and security, constituting a violation of rights under applicable laws. The AI system's deployment and use are central to the incident, as the leaked data includes chat messages and backend details tied to the AI service. The harm is realized, not just potential, and the cause is a misconfiguration in the AI system's data management. Hence, this is an AI Incident rather than a hazard or complementary information.

DeepSeek: database with sensitive data accessible to anyone

2025-01-31
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system (DeepSeek's AI platform) that processes private user data, including chat logs and API keys. The exposure of this sensitive data due to a security misconfiguration directly harms users' privacy and security, which falls under violations of human rights and harm to individuals. The AI system's development and use are implicated because the data handled by the AI was inadequately protected, leading to the incident. Therefore, this qualifies as an AI Incident due to the realized harm from the data breach linked to the AI system's operation and data management.

DeepSeek's huge security flaw: database with over 1 million chats and other data exposed online

2025-01-31
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek-R1 language model) and details a data breach exposing sensitive user chat histories and system credentials. This exposure directly harms users' privacy and potentially violates rights protected under applicable laws. The harm is realized, not just potential, as the data was publicly accessible without authentication. The AI system's use and data management practices are central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

DeepSeek: researchers discover security flaw, with data (some of it sensitive) of a million users exposed online

2025-01-31
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system that processes chat data, implying AI involvement. The exposure of sensitive data including chat histories and API secrets constitutes a violation of privacy and potentially fundamental rights. The breach directly led to harm by exposing personal and sensitive information of one million users. Therefore, this qualifies as an AI Incident under the category of violations of human rights and harm to individuals due to the AI system's use and security failure.

DeepSeek: chat histories accessible to everyone

2025-01-30
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's chatbot and its backend database) whose insecure deployment allowed unauthorized access to chat histories and sensitive data, violating privacy rights. This is a direct consequence of the AI system's use and configuration. The exposure of personal data constitutes a violation of human rights and privacy laws, fulfilling the criteria for an AI Incident. The fact that the vulnerability was corrected quickly does not negate the realized harm or the breach that occurred. Therefore, this event is classified as an AI Incident.

Researchers find serious security hole in DeepSeek; is user data at risk?

2025-02-01
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose development and deployment have led to a serious security flaw exposing sensitive user data. This exposure directly harms users' privacy and potentially violates data protection rights, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The involvement of the AI system is explicit, and the harm is realized through the data exposure, even if exploitation has not yet been confirmed.

Uh-oh! DeepSeek database exposed, revealing chat histories and sensitive data

2025-01-31
detikinet
Why's our monitor labelling this an incident or hazard?
The exposed database belongs to DeepSeek, an AI system, and the leak of sensitive chat histories and API keys directly harms users' privacy and security, fulfilling the criteria for harm to persons and violation of rights. The incident stems from the AI system's use and infrastructure misconfiguration, leading to direct harm. Therefore, this is classified as an AI Incident.

DeepSeek database containing sensitive information and chat histories believed to have leaked: Okezone Techno

2025-02-03
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The incident involves an AI system's backend database being exposed publicly, containing sensitive user data and operational secrets. This exposure directly leads to harm risks such as privacy breaches and potential misuse of data, which are harms to persons and violations of rights. The AI system's operation is central to the incident, as the database is part of its infrastructure. Therefore, this qualifies as an AI Incident due to the realized exposure of sensitive information linked to the AI system's use and operation.

Chinese AI DeepSeek: data protection authorities sound the alarm

2025-01-30
CHIP
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a machine learning-based chatbot) involved in data collection and processing. The article describes ongoing investigations into possible GDPR violations, which are legal compliance issues and potential privacy harms. However, no actual harm or confirmed violation has been established yet; the situation represents a plausible risk of harm (privacy violations) if the AI system's use continues without compliance. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to violations of rights under applicable law, but no incident has yet been confirmed.

South Korea's data protection commission reaches out to China's DeepSeek

2025-02-03
world.kbs.co.kr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's AI model) and concerns about its development and use, specifically regarding personal data processing. However, no realized harm or incident is described; rather, there is a regulatory inquiry and warnings about potential risks. This fits the definition of an AI Hazard, as the use or development of the AI system could plausibly lead to an AI Incident involving data privacy violations, but no direct or indirect harm has yet occurred.

DeepSeek causes an international stir. But is Beijing using the artificial intelligence for its own ends?

2025-02-01
tagesschau.de
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (a Chinese AI search application) whose use involves systematic censorship aligned with Chinese government policies, leading to biased and incomplete information dissemination. This censorship is a direct AI system behavior causing harm to communities by spreading distorted information and limiting access to truthful content, which can undermine democratic institutions and users' rights to accurate information. Furthermore, the article reports realized harms from data privacy violations, including data leaks and regulatory actions by data protection authorities, indicating breaches of legal obligations protecting personal data. These combined harms meet the criteria for an AI Incident, as the AI system's use has directly and indirectly led to violations of rights and harm to communities. The article does not merely warn of potential future harm but documents ongoing impacts and regulatory responses, confirming the incident classification.

The international shutout of DeepSeek is underway

2025-02-01
Gizmodo auf Deutsch | The Future Is Here
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek-R1) whose use has directly led to harm in the form of privacy violations and potential breaches of data protection laws, which are violations of fundamental rights. The article details actual investigations and enforcement actions, indicating that harm has occurred rather than just a potential risk. The involvement of multiple national data protection authorities and the blocking of the AI service in Italy further confirm the materialization of harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek: the Chinese AI's energy consumption raises questions

2025-02-03
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and focuses on its energy consumption during use, which is a significant environmental concern. However, it does not report any realized harm or incident caused by the AI system, nor does it describe a specific event where harm occurred or was narrowly avoided. Instead, it discusses potential future risks related to increased energy consumption if the approach is widely adopted, which could plausibly lead to environmental harm. Therefore, this qualifies as an AI Hazard because it highlights a credible risk of future harm stemming from the AI system's use and proliferation.

DeepSeek: the rise of China's open-source AI amid US regulatory changes and privacy concerns

2025-02-03
uncut-news.ch
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (generative AI) whose use has led to censorship and manipulation of information, restricting users' access to diverse perspectives and promoting state narratives, which is a violation of rights and harms communities. Additionally, the data privacy concerns linked to the platform's data collection and storage in China, where authorities can access data, represent a breach of privacy rights. These harms are ongoing and directly linked to the AI system's deployment and use. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek causes panic - Trump and Nvidia's CEO bet on tough sanctions!

2025-02-03
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek as an AI search engine and Nvidia's AI chips) and discusses potential security and privacy risks related to data collection and government access. These concerns could plausibly lead to harms such as violations of privacy rights or national security breaches. However, no actual harm or incident has been reported yet. The focus is on possible future risks and regulatory responses, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. Therefore, the classification is AI Hazard.