Texas and Taiwan ban Chinese AI chatbot DeepSeek over security concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Texas Gov. Greg Abbott banned the Chinese AI chatbot DeepSeek and related apps on state government devices, citing risks of data harvesting and CCP infiltration of critical infrastructure. Taiwan's Ministry of Digital Affairs likewise barred public agencies from using DeepSeek. The preventive measures sparked debate at home and in industry over security versus openness.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (DeepSeek and other Chinese AI/social media apps) and concerns about their use leading to infiltration of critical infrastructure, which is a potential harm. However, the article does not report any actual harm or incident caused by these AI systems; rather, it reports a governmental ban as a precautionary measure to prevent possible future harm. Therefore, this event is best classified as an AI Hazard, reflecting a plausible future risk rather than a realized incident.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Respect of human rights; Democracy & human autonomy; Transparency & explainability; Accountability

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
Government; General public

Harm types
Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Texas bans DeepSeek, setting a first-in-the-nation precedent

2025-02-02
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The ban is motivated by concerns that AI-powered data collection and social media apps from foreign adversaries could pose risks to critical infrastructure and state security. However, the event itself is a policy action to prevent potential harm rather than an incident where harm has already occurred or a direct malfunction or misuse of an AI system has led to harm. Therefore, this is a governance response addressing a plausible future risk, making it Complementary Information rather than an AI Incident or AI Hazard.

Issuing the first DeepSeek ban in the US, Texas governor says the CCP will not be allowed to use AI to infiltrate infrastructure

2025-02-03
公共電視
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek and other Chinese AI/social media apps) and concerns about their use leading to infiltration of critical infrastructure, which is a potential harm. However, the article does not report any actual harm or incident caused by these AI systems; rather, it reports a governmental ban as a precautionary measure to prevent possible future harm. Therefore, this event is best classified as an AI Hazard, reflecting a plausible future risk rather than a realized incident.

Texas bans DeepSeek on government devices, a first in the US

2025-02-01
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI model DeepSeek and the ban on its use on government devices due to concerns about infiltration of critical infrastructure by foreign AI systems. However, the event does not describe any actual harm or incident caused by the AI system, nor does it report any malfunction or misuse that has led to harm. Instead, it is a preventive measure reflecting a plausible risk of future harm. Therefore, this event qualifies as an AI Hazard because it involves a plausible future risk of harm from the use of an AI system, but no realized harm is reported.

To prevent infiltration, Texas governor bans DeepSeek on government devices

2025-02-02
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
DeepSeek is identified as an AI system developed by a Chinese AI research company, and its use on government devices is banned to prevent infiltration and data collection by a foreign adversary. The ban is a response to the plausible risk that the AI system could be used maliciously to harm critical infrastructure, personal data security, and intellectual property rights. Since no actual harm has occurred yet but the risk is credible and significant, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Texas blocks DeepSeek and RedNote! Banned on all government devices

2025-02-01
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek is an AI chatbot) and social media platforms with AI components. However, the event is about a government policy banning these AI systems on official devices to prevent potential security risks. There is no report of actual harm caused by these AI systems, only a precautionary measure to avoid possible future harm. Therefore, this is best classified as Complementary Information, as it relates to governance and societal response to AI-related risks rather than an AI Incident or AI Hazard.

Texas public sector bans DeepSeek

2025-02-02
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-related data collection as a reason for the ban, indicating the involvement of AI systems in these apps. The ban is a preventive measure to protect critical infrastructure from potential foreign adversary threats via AI-enabled data collection. Since no actual harm has occurred but there is a credible risk of harm to critical infrastructure and state security, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely general news because it concerns a government response to AI-related risks, but it does not report a realized harm or incident.

Texas orders DeepSeek and RedNote banned on official devices

2025-02-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is being restricted due to concerns about data privacy and national security risks. However, the article does not report any actual harm or incident caused by the AI system; rather, it focuses on the plausible future risk of harm (e.g., data misuse, espionage, infiltration of critical infrastructure). Therefore, this is an AI Hazard, as the AI system's development and use could plausibly lead to harms such as violations of privacy, breaches of security, or disruption of critical infrastructure, but no direct or indirect harm has yet occurred according to the article.

Texas orders DeepSeek and RedNote banned on official devices

2025-02-01
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek's chatbot and other AI/social media apps) and their use in government contexts. However, the event is about a governmental ban to prevent potential misuse and security risks, reflecting a plausible risk of harm rather than an actual realized harm. There is no report of injury, rights violations, or disruption caused by these AI systems so far. Therefore, this is best classified as an AI Hazard, as the administrative order addresses the plausible future harm these AI systems could cause if used on government devices, especially regarding data privacy and national security concerns.

Texas fires the first shot among the 50 US states, banning DeepSeek and RedNote in government agencies

2025-02-02
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek is an AI model comparable to Western chatbots) and government concerns about their use leading to espionage and threats to critical infrastructure and data privacy. However, no actual harm or incident caused by these AI systems is reported; rather, the event is a policy action to prevent potential risks. Therefore, this is an AI Hazard, as the development and use of these AI systems could plausibly lead to harms such as espionage or data breaches affecting critical infrastructure, but no direct harm has yet occurred according to the article.

[AI] Texas orders government devices to stop using DeepSeek and RedNote

2025-02-03
ET Net
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek and AI-powered social media apps) and their use on government devices. However, no actual harm or incident has been reported; rather, the ban is a preventive measure against plausible future harm such as data collection and infiltration of critical infrastructure. Therefore, this event is best classified as an AI Hazard, reflecting a credible risk that these AI systems could lead to harm if used within government infrastructure.

Texas sets a precedent, banning DeepSeek and RedNote in government agencies

2025-02-02
明報新聞網 - 每日明報 daily news
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek is an AI model, and RedNote is a social media platform likely using AI). However, no actual harm or incident has occurred yet; the ban is a precautionary action to prevent possible future harm such as data breaches or infiltration of critical infrastructure. Therefore, this event represents a plausible future risk (AI Hazard) rather than an incident or complementary information.

State government bans DeepSeek; Texas sets the first precedent in the US

2025-02-01
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek, an AI chatbot) and its use within government devices. The ban is motivated by concerns that the AI system could be used to collect data and potentially compromise critical infrastructure, which aligns with the definition of an AI Hazard—an event where the use or development of an AI system could plausibly lead to harm. Since no actual harm or incident has occurred yet, and the focus is on preventing potential risks, this event is best classified as an AI Hazard. The article also includes a governance response (the ban), but the primary focus is on the potential risk posed by the AI system, not on a past incident or complementary information about mitigation of an existing incident.

International focus, February 3: Texas and Taiwan ban DeepSeek on official devices

2025-02-04
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek, an AI chatbot) whose use is prohibited on government devices to prevent potential security risks. The bans are based on concerns that the AI system could be used by the Chinese Communist Party to collect data and infiltrate critical infrastructure, which could plausibly lead to violations of rights or harm to critical infrastructure. Since no actual harm has been reported yet, but the risk is credible and the bans are preventive, this qualifies as an AI Hazard.

Texas fires the first shot among the 50 US states, banning DeepSeek and RedNote in government agencies

2025-02-02
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek and others) and their potential misuse for espionage and data collection by a foreign adversary, which could plausibly lead to harms such as violations of privacy, threats to critical infrastructure, and national security risks. The governor's ban is a governance action to mitigate these risks before harm occurs. Since no realized harm is described, but a credible risk is identified and addressed, this qualifies as an AI Hazard with complementary governance response aspects. However, the primary focus is on the potential threat and preventive ban, not on an incident or realized harm.

Texas bans DeepSeek and RedNote on government devices

2025-02-01
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (DeepSeek is an AI chatbot) and social media apps with AI components. The ban is a governance response to potential risks related to data security and foreign influence, aiming to prevent possible harm to critical infrastructure and privacy. However, the article does not report any realized harm or incident caused by these AI systems, only a preventive measure. Therefore, this is a societal/governance response providing complementary information about AI-related risk management rather than an AI Incident or AI Hazard.

Up to 20 years in prison for Americans who download DeepSeek? US moves to block Chinese AI entirely; bill sparks controversy

2025-02-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (DeepSeek is a Chinese AI chatbot) and the U.S. government's use and regulation of these AI systems. However, the article does not describe any direct or indirect harm caused by the AI system's development, use, or malfunction. Instead, it focuses on legislative and regulatory responses aimed at preventing potential risks associated with Chinese AI models. This fits the definition of Complementary Information, as it provides context on governance responses and societal reactions to AI technology without reporting a specific AI Incident or AI Hazard.

New US bill: downloading DeepSeek could mean 20 years in prison; AI ban sparks controversy

2025-02-04
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek and other Chinese AI models) and their use, but the article primarily reports on legislative proposals, bans, and policy measures rather than describing any realized harm or direct incidents caused by the AI systems. There is no explicit mention of injury, rights violations, or other harms caused by the AI systems themselves. Instead, the focus is on potential risks and regulatory responses, which fits the definition of Complementary Information as it provides context and updates on governance responses to AI-related concerns. Therefore, this is not an AI Incident or AI Hazard but Complementary Information.

Texas sets the first example: DeepSeek and RedNote banned on government devices

2025-02-03
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek chatbot) and AI-enabled social media platforms. The ban is due to concerns about data collection, potential foreign government access, and cybersecurity risks, which could plausibly lead to harm to critical infrastructure or privacy violations. Since no actual harm or incident has been reported, but credible risks and preventive actions are described, the event fits the definition of an AI Hazard rather than an AI Incident. The focus is on plausible future harm and preventive governance measures.

Texas bans DeepSeek on government devices, a first in the US

2025-02-01
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI model DeepSeek and the ban is motivated by concerns over infiltration and threats to critical infrastructure. However, no actual harm or incident caused by the AI system is reported; rather, the ban is a precautionary action to prevent possible future harm. Therefore, this event represents a plausible risk of harm from the AI system's use, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

Texas government issues blocking order: DeepSeek and RedNote banned on all official devices

2025-02-01
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek is an AI chatbot) and concerns about their use leading to potential harm to critical infrastructure and information security. However, the article describes a government ban as a precautionary action to prevent possible future harm rather than an incident where harm has already occurred. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to an AI Incident involving security breaches or espionage, but no direct harm has been reported yet.

Firing the first shot among the 50 US states, Texas bans DeepSeek and RedNote in government agencies

2025-02-02
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (DeepSeek chatbot) and social media apps with AI components, and the ban is due to concerns about espionage and data security risks, which relate to critical infrastructure and personal data protection. No actual harm or incident is reported; the ban is a preventive action against plausible future harm. Hence, it fits the definition of an AI Hazard, not an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event.

Texas governor sets a precedent: DeepSeek banned on state government devices

2025-02-01
美国之音
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek as an AI chatbot developed by a Chinese startup, which is banned on Texas state government devices due to concerns about data collection and infiltration by a foreign adversary. The ban is a preventive action to protect critical infrastructure and personal data, indicating a plausible risk of harm rather than a realized harm. Since no direct or indirect harm has occurred yet, but the AI system's use could plausibly lead to harms, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete government action in response to potential AI risks, and it is not unrelated as it directly involves an AI system and its potential harms.

DeepSeek's rise touches a nerve at the national level: Texas signs departmental ban, the Netherlands launches privacy investigation

2025-02-03
經濟一週
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI system involved in data collection, which has led to governmental actions such as bans and privacy investigations due to concerns about unauthorized use of data and potential infiltration of critical infrastructure. Although no direct harm has been reported, the event clearly indicates plausible future harm related to privacy violations and national security. Therefore, this qualifies as an AI Hazard because the development and use of DeepSeek could plausibly lead to an AI Incident involving violations of privacy rights and risks to critical infrastructure.

Texas fires the first shot among the 50 US states! Government agencies banned from using DeepSeek and RedNote

2025-02-02
三立新聞
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (DeepSeek and social media apps with AI capabilities) and their potential misuse for espionage and data collection, which could plausibly lead to harm to critical infrastructure and privacy. However, no actual harm or incident has occurred yet; the event is about a government ban to mitigate potential risks. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of future harm from the use of these AI systems in government contexts.

Texas governor sets a precedent: DeepSeek banned on state government devices

2025-02-01
美國之音
Why's our monitor labelling this an incident or hazard?
An AI system (DeepSeek AI chatbot) is explicitly mentioned, and its use on government devices is prohibited due to potential security and ethical risks. The event involves the use and potential misuse of AI systems, but no direct or indirect harm has occurred yet. The ban is a precautionary action to prevent plausible future harms such as data breaches or foreign interference, which aligns with the definition of an AI Hazard. Therefore, this event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

"逢中必禁"?台灣跟風封殺DeepSeek引輿論反彈

2025-02-03
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and governmental actions banning its use citing security concerns, which implies a plausible risk of harm (data leakage, information security threats). However, there is no indication that any harm has actually occurred due to DeepSeek's use. The bans are preventive, reflecting a potential risk rather than a realized incident. The public and expert reactions provide context and critique of these measures. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm from the AI system's use, but no direct or indirect harm has been reported yet.

Texas fires the first shot among the 50 US states, banning DeepSeek and RedNote in government agencies

2025-02-02
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek is an AI model comparable to Western chatbots) and their use within government agencies. Although no direct harm has been reported, the ban is based on the plausible risk that these AI systems could be used maliciously to harm critical infrastructure, steal intellectual property, or compromise personal data. Therefore, this constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harm, but no incident has yet occurred.

Texas sets a nationwide precedent: DeepSeek banned on government devices

2025-02-03
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek-R1) and its use in financial and social media applications. The bans by Texas government and NASA, as well as investigations by European and Korean regulators, indicate concerns about potential misuse of AI for data collection and infiltration, which could plausibly lead to harms such as violations of privacy or critical infrastructure disruption. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this event fits the definition of an AI Hazard (plausible future harm) and complementary information about regulatory responses. Since the main focus is on the bans and investigations as preventive and governance actions rather than a realized harm, the classification is AI Hazard.

Texas fires the first shot among the 50 US states, banning DeepSeek and RedNote in government agencies

2025-02-02
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (DeepSeek is an AI model, and the apps mentioned use AI for data collection and social media functions). The governor's ban is a governance action taken to prevent potential harm to critical infrastructure and data security from these AI systems. Since no actual harm has occurred but there is a credible risk of harm, this qualifies as an AI Hazard. It is not an AI Incident because no realized harm is described. It is not merely Complementary Information because the main focus is the ban as a preventive measure, not an update or response to a past incident. Therefore, the classification is AI Hazard.