Global Ban on DeepSeek AI App Due to Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI app DeepSeek faces bans around the world over privacy and security concerns, amid fears of data breaches and misuse by Chinese authorities. Texas, NASA, and the US Navy have prohibited its use, citing risks of data collection and malware. The app's privacy policy has been criticized as ineffective. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (DeepSeek) whose use by government agencies is prohibited due to plausible risks related to information security and data privacy, which could lead to harm such as breaches of confidentiality or national security. However, the article does not report any actual harm or incident caused by the AI system, only the potential for such harm. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving information security breaches or violations of rights, but no direct or indirect harm has yet materialized according to the article. [AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Transparency & explainability; Accountability; Respect of human rights; Safety

Industries
Consumer services; Digital security; Government, security, and defence; IT infrastructure and hosting

Affected stakeholders
Consumers; Government

Harm types
Human or fundamental rights; Public interest; Economic/Property; Reputational

Severity
AI hazard


Articles about this incident or hazard

Taiwan bans the Government from using DeepSeek. EU wants more information

2025-02-01
ECO
Why's our monitor labelling this an incident or hazard?
The article’s primary narrative is about governments (Taiwan, Italy, South Korea, France, Ireland) implementing bans and information requests as a response to perceived security risks of an AI service. There is no specific harm event caused by the AI system, nor a direct or indirect incident of damage. Instead, it reports governance responses and oversight measures, fitting the definition of Complementary Information.
Taiwan bans government agencies from using Chinese app DeepSeek - SAPO.pt

2025-02-01
SAPO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use by government agencies is prohibited due to plausible risks related to information security and data privacy, which could lead to harm such as breaches of confidentiality or national security. However, the article does not report any actual harm or incident caused by the AI system, only the potential for such harm. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving information security breaches or violations of rights, but no direct or indirect harm has yet materialized according to the article.
Singapore denies link to alleged illegal chip purchase by China's DeepSeek - SAPO.pt

2025-02-01
SAPO
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek) involved in activities that are under investigation for potentially illegal acquisition of advanced semiconductor chips, which could enable capabilities with national security implications. Multiple governments have banned or restricted the use of this AI system citing security risks. However, the article does not report any actual harm or incident caused by the AI system's use or malfunction. The focus is on the plausible future risks and regulatory responses. Hence, this qualifies as an AI Hazard due to the credible risk of harm related to national security and legal violations, but not an AI Incident since no harm has been confirmed or reported yet.
Already? China's rival bans DeepSeek for civil servants

2025-02-02
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek R1) and describes a government action banning its use in public and critical infrastructure sectors due to concerns about information security risks and data privacy. However, the article does not report any realized harm or incident caused by the AI system; rather, it focuses on the potential risks and preventive measures taken by Taiwan and other countries. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., data leaks, national security breaches), but no direct or indirect harm has yet been reported.
Singapore denies link to alleged illegal chip purchase by DeepSeek

2025-02-02
Notícias ao Minuto Brasil
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek's generative AI models) and discusses concerns about illegal chip procurement and national security risks, which could plausibly lead to harms such as violations of legal frameworks or security breaches. However, the harms are not reported as having occurred; rather, the article focuses on investigations, bans, and regulatory scrutiny as responses to potential risks. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but no realized harm is described.
Artificial Intelligence: the open war between the United States and China's DeepSeek - Forbes Portugal

2025-02-03
Forbes Portugal
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI generative model explicitly mentioned as being used and scrutinized. The bans on government use and investigations by multiple countries indicate that the AI system's use is linked to concerns about information security and national security, which fall under harm categories (c) violations of rights and (b) disruption of critical infrastructure. The article reports actual prohibitions and investigations, not just potential risks, indicating realized or ongoing harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The geopolitical and regulatory responses further support the classification as an incident due to direct or indirect harm caused by the AI system's deployment and use.
DeepSeek under scrutiny: which countries have restricted or are investigating the Chinese AI company? - Executive Digest

2025-02-03
Executive Digest
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek-R1) whose use has led to concerns about data security and privacy, which are fundamental rights and national security issues. Several countries have already banned or restricted the AI system's use, indicating that the AI system's deployment is linked to potential or actual harm, such as violations of privacy and risks to critical infrastructure security. The article reports on realized governmental actions (bans and investigations) in response to these harms or risks. Since the bans and investigations are reactions to existing or imminent harms related to the AI system's use, this qualifies as an AI Incident rather than merely a hazard or complementary information. The harms include potential violations of rights and risks to critical infrastructure, fitting the AI Incident definition.
China's DeepSeek shakes the world as fears of personal data leaks intensify - Maeil Business Newspaper

2025-02-03
mk.co.kr
Why's our monitor labelling this an incident or hazard?
DeepSeek’s core AI functionality—aggregating and processing personal search and behavioral data—could plausibly lead to large-scale privacy violations, industrial espionage, and cybersecurity incidents if misused or further exploited. The article focuses on these potential harms and preemptive bans rather than detailing a realized breach or post-incident remediation, making it an AI Hazard.
Amid security concerns... a wave of 'DeepSeek bans' sweeps the world

2025-02-03
Chosun Ilbo
Why's our monitor labelling this an incident or hazard?
The article describes recognized weaknesses in an AI system (DeepSeek) whose use could allow data exfiltration, spyware activity, or malware injection, prompting precautionary bans. No actual data breaches or harms have been reported yet, but the vulnerabilities create a credible risk of future harm—characteristic of an AI hazard.
Researchers link DeepSeek's blockbuster chatbot to Chinese telecom banned from doing business in US

2025-02-05
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
Security researchers identified hidden scripts in the AI-powered chatbot’s login process that—if executed—would transmit user credentials to China Mobile, a state-owned company barred from U.S. operations. No actual data breach has been confirmed, but the code’s presence constitutes a credible threat of unauthorized data sharing and privacy violation, meeting the criteria for an AI Hazard.
DeepSeek's Secret Code: Data Going to China?

2025-02-06
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The event describes an AI system’s web interface actively exfiltrating user data to an unauthorized third party, constituting a realized violation of user privacy and legal obligations. This direct misuse of the AI system meets the criteria for an AI Incident.
Experts reveal chilling link between DeepSeek AI bot & the Chinese government

2025-02-06
The Sun
Why's our monitor labelling this an incident or hazard?
The event describes an AI system—DeepSeek—whose use is directly leading to a breach of privacy rights by transmitting sensitive user data to a state actor. This constitutes a realized harm (violation of fundamental rights under applicable law) rather than a potential risk, meeting the criteria of an AI Incident.
Study ties AI site to telecom banned in US | Northwest Arkansas Democrat-Gazette

2025-02-06
Northwest Arkansas Democrat Gazette
Why's our monitor labelling this an incident or hazard?
DeepSeek’s chatbot is an AI system, and the discovery of code linking its login process to a banned, state-owned telecom represents a credible future risk of privacy and national security harm. No actual data transfer was confirmed, so the event describes a plausible threat rather than a realized incident, fitting the AI Hazard definition.
Is DeepSeek Sending Your Data to China's State-Run Telecom Firm?

2025-02-06
Outlook Business
Why's our monitor labelling this an incident or hazard?
DeepSeek’s chatbot is an AI system whose code has been found to transfer users’ login information to a foreign state-run telecom without consent. This constitutes a realized harm—violation of user privacy—and thus meets the criteria for an AI Incident under the category of human rights/privacy breach.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
AP News
Why's our monitor labelling this an incident or hazard?
The article details a newly discovered mechanism in the AI system’s login process that could exfiltrate user data to a state-owned entity. While this represents a significant privacy and security concern, there is no confirmed evidence of actual data transfer or harm having occurred. Thus, it describes a plausible future risk rather than a realized incident.
DeepSeek code may send U.S. user data straight to the Chinese government: report

2025-02-05
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose browser‐based code clandestinely transmits sensitive personal data and digital fingerprints of U.S. users to a state‐controlled Chinese telecom registry. This unauthorized data exfiltration is a realized harm—specifically a breach of privacy and potential violation of users’ rights—directly caused by the AI system’s functionality.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes a potential scenario in which an AI chatbot’s code could send sensitive user data to a state-linked entity, creating credible risks of privacy violation and espionage. No actual data transfer or harm has been confirmed, so this is a plausible future harm rather than a realized incident.
DeepSeek source code can send data to Chinese company that America banned in 2019; called more dangerous than TikTok - The Times of India

2025-02-05
The Times of India
Why's our monitor labelling this an incident or hazard?
An AI-powered system (DeepSeek chatbot) is implicated in code that could plausibly lead to significant harm (unauthorized data exfiltration to a banned state actor). No actual harm has yet been reported, but the described vulnerability creates a credible future risk, fitting the definition of an AI Hazard.
DeepSeek is reportedly sending intricate user data to Chinese telecom despite US ban -- weeks after suffering a "large-scale cyberattack"

2025-02-05
Windows Central
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system whose website code directly captures users’ login data without their knowledge and likely transfers it to a company barred in the U.S. This constitutes unauthorized data collection and potential surveillance—violations of users’ privacy and national security obligations—so the incident is a realized harm from the development/use of an AI system.
DeepSeek code may send U.S. user data straight to the Chinese government: report

2025-02-05
The Independent
Why's our monitor labelling this an incident or hazard?
The article describes a newly uncovered malicious capability in DeepSeek’s AI system that could lead to unauthorized data transfers and privacy violations but does not document confirmed data exfiltration. This represents a credible risk of harm (privacy breach, violation of rights) rather than a remediated incident or mere contextual update. Therefore, it qualifies as an AI Hazard.
Researchers link DeepSeek's blockbuster chatbot to Chinese telecom banned from doing business in US

2025-02-06
The Virgin Islands Daily News
Why's our monitor labelling this an incident or hazard?
DeepSeek’s chatbot is an AI system handling sensitive user inputs. Researchers uncovered obfuscated scripts that could transmit login data to China Mobile, posing a credible future threat of privacy violation and national security harm. Since no actual data leakage has been confirmed but the potential for user data exfiltration by a hostile state actor exists, this qualifies as an AI Hazard under the framework (plausible future harm due to AI system design/use).
Researchers link DeepSeek's blockbuster chatbot to Chinese telecom banned from doing business in US

2025-02-06
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The event describes a discovered vulnerability in an AI-powered chatbot’s login process that ‘could send’ sensitive user information to a foreign state-owned carrier barred from U.S. operations. Because no confirmed breach or realized harm is reported, but the code’s presence creates a plausible pathway for privacy/data exfiltration, this qualifies as an AI Hazard.
U.S. lawmakers move to ban China's DeepSeek from government devices

2025-02-06
NBC News
Why's our monitor labelling this an incident or hazard?
The article centres on a third-party security analysis revealing DeepSeek’s AI system ‘can’ send sensitive user credentials to a Chinese state-owned telecom, posing a credible espionage threat. No actual data theft incident is reported, but the described capabilities create a plausible risk of harm. Therefore, it constitutes an AI Hazard rather than a realized AI Incident or mere complementary information.
DeepSeek Might Be Sharing Your Data With This Banned Chinese Company

2025-02-06
Gadgets 360
Why's our monitor labelling this an incident or hazard?
A cybersecurity firm uncovered code in DeepSeek’s web client enabling potential data transfer to a banned telecom operator with ties to the Chinese government. No confirmed data leakage has occurred, but the presence of this code introduces a credible risk of user privacy violations and national security threats. As the harm is not yet realized but is a plausible consequence of the AI system’s design, this event constitutes an AI Hazard.
DeepSeek's advanced tracking technology 'never seen before'

2025-02-06
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
DeepSeek’s AI system is explicitly described as tracking and transmitting sensitive user data to Chinese government–owned servers. While no incident has yet occurred, the functionality poses a credible risk of data leaks and espionage. This meets the definition of an AI Hazard—an AI system whose use could plausibly lead to direct or indirect harm.
Researchers link DeepSeek's blockbuster chatbot to China Mobile ban in US

2025-02-07
MACAU DAILY TIMES 澳門每日時報
Why's our monitor labelling this an incident or hazard?
DeepSeek’s chatbot is clearly an AI system and the discovered code creates a credible risk of unauthorized data transmission to a state-linked entity. No actual data exfiltration harm has been documented yet, but the potential for user data to be siphoned to China Mobile constitutes a plausible pathway to a serious incident. Therefore, this scenario represents an AI Hazard.
Chinese AI firm DeepSeek faces scrutiny over data privacy

2025-02-07
KOAT
Why's our monitor labelling this an incident or hazard?
DeepSeek’s AI system contains code that could channel sensitive user data to a sanctioned state‐owned enterprise, posing a credible threat of privacy and security violations even if no actual data transfer has been confirmed. Because harm has not yet been realized but is plausibly inducible by the AI system’s design, this qualifies as an AI Hazard.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
The Independent
Why's our monitor labelling this an incident or hazard?
The article describes a generative AI system (DeepSeek chatbot) whose login process contains code that could send sensitive user data to a sanctioned, state-owned telecom. While no confirmed data transfers or breaches have been observed, the link represents a plausible pathway for unauthorized surveillance and privacy violations. Because harm has not yet been verified but could realistically occur, this constitutes an AI Hazard.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns | FOX 28 Spokane

2025-02-05
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The article describes how the DeepSeek AI system’s login process could send user credentials to a state-owned telecom, posing a credible risk of data privacy breaches. No actual user data leak or harm has been confirmed, making this a potential threat rather than a realized incident.
World News | Researchers Say China's DeepSeek Chatbot Linked to State Telecom, Raising Data Privacy Concerns | LatestLY

2025-02-05
LatestLY
Why's our monitor labelling this an incident or hazard?
DeepSeek’s chatbot is an AI system whose login code could plausibly send sensitive user data to a sanctioned state actor. While no confirmed data exfiltration or concrete harm has yet been observed, the exposed infrastructure link introduces a credible risk of user surveillance and intelligence gathering. This constitutes a potential (not yet realized) AI-driven harm scenario, classifying it as an AI Hazard.
Researchers say China's DeepSeek chatbot is linked to state...

2025-02-05
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
DeepSeek’s AI chatbot and its login process involve an AI system (the chatbot platform) whose account creation/login code could plausibly send personal and proprietary user data to China Mobile, a barred state-owned telecom with alleged military ties. No actual data breach has been confirmed, but the potential for such a breach constitutes a clear future risk. Therefore, this event is best characterized as an AI Hazard rather than an AI Incident or Complementary Information.
China's DeepSeek Chatbot Ties To State Telecom Spark Privacy Fears - Ny Breaking News

2025-02-05
NY Breaking News
Why's our monitor labelling this an incident or hazard?
This event involves the use of an AI system (a generative chatbot) whose deployment includes obscured scripts that could enable unauthorized data transfer to a state-owned entity. No confirmed data breach has occurred, but the described linkage creates a credible future risk of privacy and security harm, fitting the definition of an AI Hazard rather than an Incident or mere background information.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
ABC Action News Tampa Bay (WFTS)
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is implicated in potentially transferring user login and device fingerprinting data to a state-owned telecom linked to the Chinese state. Although actual harm has not been confirmed, the findings reveal a credible pathway for intelligence collection and privacy breaches. This constitutes a potential risk that could plausibly lead to an AI‐related incident, making it an AI Hazard.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
No actual data breach has been observed, but the obfuscated script creates a credible risk that user credentials and other login information could be sent to a foreign state‐owned entity without user consent. The event describes a plausible future privacy and security harm caused by the AI system’s design and infrastructure link, fitting the definition of an AI Hazard.
DeepSeek coding has the capability to transfer users' data directly to the Chinese government

2025-02-05
ABC News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system for search and interaction. Cybersecurity experts have uncovered intentionally hidden programming that could transfer Americans’ personal and web-activity data to China Mobile (a PRC government–owned registry), creating a covert surveillance channel. While no confirmed data exfiltration has been documented yet, the presence of an active back-door poses a clear risk of unauthorized data transfer, privacy violations, and national security harm if exploited. This is a plausible future harm scenario rather than a fully realized incident.
Researchers say China's DeepSeek chatbot linked to state telecom, raising data privacy concerns Washington

2025-02-05
The Indian Express
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI-driven chatbot. The article reports a novel security vulnerability that could plausibly lead to privacy and national security harms, but does not document any actual data transfer or realized harm. This describes a credible potential risk stemming from the AI system’s design and deployment, making it an AI Hazard.
DeepSeek chatbot linked to China's telecom firm that is barred from operating in United States, claim researchers

2025-02-05
The Economic Times
Why's our monitor labelling this an incident or hazard?
The article describes a plausible risk stemming from the use of an AI system (DeepSeek chatbot) that could lead to unauthorized data transfer and privacy violations. No confirmed data breach or harm has occurred, but the potential for exfiltration of sensitive user information to a sanctioned state-owned telecom constitutes an AI Hazard rather than an incident.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
The Columbian
Why's our monitor labelling this an incident or hazard?
The article describes a newly revealed capability of DeepSeek’s AI chatbot to send user credentials to a state-owned entity with military ties. No actual breach is reported, but the code ‘could send’ sensitive data without users’ knowledge, creating a credible risk of privacy and security harm. This fits the definition of an AI Hazard—a circumstance where the use of an AI system could plausibly lead to an incident.
DeepSeek's Hidden Code: Experts Warn Of Data Leak To Chinese Government

2025-02-05
NewsX World
Why's our monitor labelling this an incident or hazard?
DeepSeek is clearly an AI system, and the hidden data‐exfiltration capability constitutes a credible path to significant harm (privacy violations, national security threats). Because no confirmed leak or realized harm has yet been documented—but there is a plausible risk stemming from the AI’s design and code—the event is best classified as an AI Hazard.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
NonStop Local Billings
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (DeepSeek chatbot) whose account creation/login code could send sensitive user information to a Chinese state‐owned telecom linked to the military. No actual data breach has been confirmed, but the code’s presence creates a plausible risk of intelligence‐gathering and privacy violations. This represents a potential future harm rather than a realized incident.
DeepSeek code can send user data directly to Chinese government: Report

2025-02-05
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system and cybersecurity experts have discovered intentionally hidden code that could send sensitive user data to the Chinese government. Although no breach has been reported, the capability itself constitutes a plausible pathway to serious harm (national security compromise), so this qualifies as an AI Hazard.
Yahoo Finance: U.S. Lawmakers Push to Ban China's DeepSeek AI Over Security Risks - Feroot Security Analysis

2025-02-07
Security Boulevard
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system (chatbot) embedding tracking and data‐exfiltration capabilities. The piece focuses on research findings revealing ongoing data collection and potential transfer of sensitive government queries to Chinese servers. While harm has not yet been publicly realized, these activities could plausibly lead to privacy violations and national security breaches. The lawmakers’ proposed ban is a governance response to this risk. Hence, this situation represents an AI Hazard.
Yahoo Finance: U.S. Lawmakers Push to Ban China's DeepSeek AI Over Security Risks - Feroot Security Analysis - IT Security News

2025-02-07
IT Security News
Why's our monitor labelling this an incident or hazard?
No actual incident of data exfiltration or harm has materialized; the piece primarily reports on a governance action (proposed ban) in response to identified risks. This fits the definition of Complementary Information, as it covers a policy and security-analysis update rather than a new AI incident or hazard with realized harm.
Researchers say DeepSeek chatbot is linked to China-owned telecom company that's banned in the US - The Times of India

2025-02-05
The Times of India
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is involved, specifically a generative AI system used by many users. The event concerns the use of this AI system and a hidden code that may send sensitive user data to a banned foreign state-owned telecom company, which constitutes a violation of privacy and raises national security concerns. This is a direct or indirect harm to users' rights and potentially to national security (harm to communities or breach of obligations under applicable law). Although actual data transfer was not observed, the potential for such transfer and the presence of the code create a credible risk of harm. Since the article describes a current situation with potential realized harm (privacy breach and national security risk), this qualifies as an AI Incident rather than a mere hazard or complementary information.
DeepSeek pushed to be banned from all US government-owned devices

2025-02-06
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and its use. The main issue is the potential for the AI system to collect and transmit sensitive data to a foreign government, which could lead to harms such as violations of privacy, national security breaches, and possibly harm to communities or individuals if sensitive information is exploited. Although there are strong concerns and governmental actions (bans and proposed legislation), the article does not report any confirmed incidents of data leakage or harm caused by the AI system so far. The cybersecurity researchers could not confirm data transfer to China Mobile in North America, and the concerns remain about plausible future harm. Hence, this event fits the definition of an AI Hazard rather than an AI Incident. The legislative and governmental responses are part of the hazard mitigation but do not themselves constitute complementary information since the main focus is on the potential harm from the AI system.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns - Tech - Business

2025-02-05
Al-Ahram
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek chatbot) is explicitly involved. The event concerns the use of the AI system and its infrastructure, which is linked to a state-owned entity with potential military ties. While no direct harm (such as confirmed data breaches or misuse) has been demonstrated, the plausible risk of significant harm to users' privacy, personal data, and national security is credible and clearly articulated. Therefore, this event fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving violations of privacy and potential breaches of rights or security.
Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
Market Beat
Why's our monitor labelling this an incident or hazard?
The DeepSeek chatbot is an AI system as it is a generative AI chatbot application. The event involves the use of this AI system and the discovery of code linking user login data to China Mobile, a state-owned Chinese telecom company with military ties. This connection raises concerns about unauthorized data sharing and potential privacy violations, which are harms related to human rights and data protection. Although no confirmed data transfer was observed, the plausible risk of such transfer and the sensitive nature of the data involved indicate a credible threat of harm. Therefore, this event qualifies as an AI Hazard because it describes a circumstance where the use of an AI system could plausibly lead to significant harm, but no confirmed harm has yet occurred according to the article.

"Disturbing" new detail emerges about DeepSeek and what it does with your data

2025-02-05
Neowin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) whose use involves obfuscated code and AI-assisted analysis revealing covert data transfers to Chinese government-controlled entities. This data collection and tracking without user consent constitutes a violation of privacy and potentially national security laws, fulfilling the criteria for harm under violations of rights. The involvement of AI in both the system's operation and the analysis of its code is clear. The harm is realized as users' data is being collected and tracked, and governmental actions indicate recognition of this harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek's Security Vulnerability: Ties to China Mobile Unveiled | Technology

2025-02-05
Devdiscourse
Why's our monitor labelling this an incident or hazard?
DeepSeek's chatbot is an AI system, and its connection to China Mobile, a banned entity in the US for national security reasons, raises credible concerns about data privacy and potential misuse of user information. While no direct harm has been observed, the potential for such harm (exposure of sensitive data) is plausible and significant, fitting the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article focuses on the risk and scrutiny rather than a response or update, so it is not Complementary Information.

Researchers say China's DeepSeek chatbot is linked to state telecom, raising data privacy concerns

2025-02-05
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The article identifies an AI system (DeepSeek chatbot) and its connection to China Mobile, a state-owned entity with alleged military ties, raising concerns about data privacy. While this connection suggests potential risks, no actual harm or misuse is reported. The event is primarily an update revealing new contextual information about the AI system's infrastructure and privacy implications, without describing realized or imminent harm. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

List of countries that have either restricted or banned the use of DeepSeek - South Asian Daily

2025-02-06
SouthAsianDaily.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use has been restricted or banned by several countries due to concerns about data security and privacy risks. These concerns reflect a plausible risk of harm (e.g., violations of privacy rights or security breaches) that could arise from the AI system's use. Since no actual harm or incident has been reported yet, but the risk is credible and has led to preventive actions, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Call to prohibit the use of DeepSeek on all devices owned by the US government due to concerns about the Chinese chatbot potentially gathering important information. - Internewscast Journal

2025-02-06
internewscast.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the DeepSeek chatbot) whose use on government devices is feared to lead to significant harm, including national security risks and privacy violations. The concerns stem from the AI system's collection and transmission of data to Chinese state-owned infrastructure, which could be exploited for intelligence purposes. While no direct harm is reported as having occurred, the risk is credible and serious enough to have prompted legislative action and bans by multiple governments. This fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident involving harm to rights and national security. It is not an AI Incident because the harm has not yet materialized, and it is not Complementary Information or Unrelated, as the focus is on the risk of, and regulatory response to, the AI system's use and potential harm.

A revolution in artificial intelligence by a man who was not taken seriously

2025-02-02
ISNA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek's advanced AI chatbot) whose deployment has directly led to significant economic disruption (harm to property via stock market impact) and raises concerns about violations of privacy and potential political manipulation (harm to communities and human rights). The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. Although some concerns are potential, the article also describes realized harms such as market value loss and data collection practices, which are direct consequences of the AI system's use. Hence, the classification as AI Incident is appropriate.

An atomic confrontation with artificial intelligence

2025-02-03
ISNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and discusses its use and the international response to it, including bans and restrictions due to security concerns. However, there is no indication that the AI system has caused any injury, rights violations, or other harms. The concerns are about potential future control and geopolitical power struggles, which are important but do not constitute a direct or indirect AI Incident or a plausible AI Hazard as defined. The content mainly provides background, warnings, and political context, fitting the definition of Complementary Information rather than an Incident or Hazard.

US bans Navy use of DeepSeek

2025-01-30
Jahan Mana - news and information portal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and the U.S. Navy's directive to ban its use due to potential security and ethical risks. No actual harm has been reported, but the concern and preventive action indicate a plausible risk of harm. The event is about the potential for harm rather than a realized incident, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Pentagon bans its employees from using DeepSeek AI - TechFars

2025-02-02
Jahan Mana - news and information portal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use by Pentagon employees has led to a security threat due to data being stored on servers under Chinese jurisdiction, potentially exposing sensitive information to foreign intelligence. This constitutes a violation of security protocols and a disruption to the management and operation of critical infrastructure (the Pentagon's information systems). The harm is realized as the Pentagon has taken immediate action to ban the AI system's use, indicating the threat is materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

US bans Navy use of DeepSeek

2025-01-30
ana.ir
Why's our monitor labelling this an incident or hazard?
The U.S. Navy explicitly banned the use of the AI system DeepSeek citing potential security and ethical risks, indicating concern about plausible future harm. The AI system is involved as a technology whose use is restricted to avoid possible negative consequences. There is no report of actual harm or incidents caused by DeepSeek, only a precautionary ban. This fits the definition of an AI Hazard, as the event involves the use of an AI system that could plausibly lead to harm, but no harm has yet materialized.