Microsoft Bans DeepSeek Over Data and Propaganda Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

At a US Senate hearing on AI competition, Microsoft President Brad Smith announced a ban on employees using DeepSeek’s application due to risks of personal data being sent to China and the propagation of Chinese propaganda. Tech leaders highlighted the security threat amid the race for global AI adoption.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (DeepSeek) and discusses its development and use. The concerns raised relate to data privacy violations, propaganda dissemination, and national security risks, which align with potential harms under the AI Hazard definition. Since no actual harm or incident has been reported, but credible risks are identified and preventive actions are taken, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the risk and preventive ban, not on updates or responses to a past incident. It is not Unrelated because the AI system and its risks are central to the report.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Transparency & explainability; Respect of human rights; Democracy & human autonomy; Accountability; Safety

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing

Affected stakeholders
Workers; General public

Harm types
Human or fundamental rights; Public interest; Reputational

Severity
AI hazard

AI system task
Organisation/recommenders; Content generation


Articles about this incident or hazard

DeepSeek AI Model Poses Two Major Risks, Microsoft Bans Employees from Using It - International - Liberty Times Net

2025-05-09
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek's AI model) whose use has led to realized harms: unauthorized data collection (privacy violation) and manipulation of content for propaganda, which can harm communities and violate rights. Microsoft's ban and government reports confirm the AI system's role in these harms. Hence, this is an AI Incident due to direct harm caused by the AI system's use.
DeepSeek App Poses Two Major Risks, Microsoft Bans Employee Use | NTD Television

2025-05-09
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its development and use. The concerns raised relate to data privacy violations, propaganda dissemination, and national security risks, which align with potential harms under the AI Hazard definition. Since no actual harm or incident has been reported, but credible risks are identified and preventive actions are taken, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the risk and preventive ban, not on updates or responses to a past incident. It is not Unrelated because the AI system and its risks are central to the report.
Microsoft Bans Employees from Using DeepSeek over Data Leakage and Propaganda Content Risks

2025-05-12
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI application) and concerns about its use leading to data leakage and propaganda dissemination, which could harm national security and human autonomy. However, no actual harm or incident has been reported; rather, Microsoft has proactively banned its use internally, and the US Congress is discussing related risks and regulatory responses. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to significant harm, but no direct or indirect harm has yet occurred.
[Bay Area News] Santa Clara City Council's Use of AI in Decision-Making Sparks Controversy - KTSF.com

2025-05-12
KTSF
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a public decision-making context, which could plausibly lead to harm if incorrect or misleading AI outputs influence important decisions. The article highlights concerns about AI's imperfections and the risk of public danger due to insufficient understanding and reliance on AI. Since no harm has yet occurred but there is a credible risk of future harm, this qualifies as an AI Hazard rather than an AI Incident. The formation of a working group to develop AI usage guidelines further supports the recognition of potential risks rather than realized harm.
Survey Finds Only 4% of Taiwanese Firms Have Mature Cybersecurity; Over 90% Encountered AI-Related Security Incidents in the Past Year

2025-05-13
公共電視
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a large majority of Taiwanese enterprises have encountered AI-related cybersecurity incidents, which implies direct or indirect harm to these organizations' security posture and operations. The presence of AI systems is clear given the context of AI-related cybersecurity events. The harm is realized, not just potential, as incidents have occurred. This fits the definition of an AI Incident because the development, use, or malfunction of AI systems has directly or indirectly led to harm (here, cybersecurity breaches or attacks). The article does not merely discuss potential risks or responses but reports on actual incidents, so it is not a hazard or complementary information.
KPMG Report: Workplace AI Adoption in Mainland China Reaches 93%, Significantly Above the Global Average

2025-05-12
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article discusses survey results on AI use and trust in workplaces, noting risks like misuse and lack of output verification, but does not report any concrete harm or incident caused by AI. It also mentions governance challenges and training efforts, which are responses to AI adoption. Therefore, it fits the definition of Complementary Information, as it provides supporting data and context about AI use and governance without describing a specific AI Incident or Hazard.
US Tech Heavyweights Call for Repeal of AI Export Controls

2025-05-09
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of AI infrastructure and export controls, but it does not describe any direct or indirect harm caused by AI system development, use, or malfunction. The discussion is about policy and governance responses to AI competition and export regulation, which is complementary information enhancing understanding of the AI ecosystem and governance landscape. There is no indication of an AI incident or hazard occurring or plausibly imminent from the content provided.
Altman and Lisa Su Speak Out! US Senate AI Hearing Focuses on AI Development | Anue鉅亨 - US Stock Radar

2025-05-10
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental hearing with industry leaders discussing AI development, infrastructure, and geopolitical competition. There is no mention of any AI system malfunction, misuse, or harm occurring or plausibly imminent. The focus is on strategic and policy considerations rather than any specific AI incident or hazard. Therefore, it fits the category of Complementary Information as it provides context and updates on AI governance and ecosystem developments without reporting a new incident or hazard.
Turkey Actively Building Its AI Ecosystem and Institutions, with 71 Action Plans Launched - MoneyDJ理財網

2025-05-13
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The content outlines government plans, policies, and programs to promote AI development and governance in Turkey. There is no mention of any AI system malfunction, misuse, or harm occurring or plausibly imminent. The article is about ongoing AI ecosystem building and regulatory efforts, which fits the definition of Complementary Information as it provides context and updates on AI governance and development without describing an AI Incident or AI Hazard.
Lisa Su on the US-China AI Race: Taiwan Will Play a Key Role

2025-05-09
公共電視
Why's our monitor labelling this an incident or hazard?
The article focuses on strategic and geopolitical aspects of AI technology competition and supply chains, without describing any AI system malfunction, misuse, or harm. It does not report any event where AI systems have directly or indirectly caused harm or where AI systems could plausibly lead to harm. The content is about industry and policy context, making it complementary information about the AI ecosystem rather than an incident or hazard.
Does Using AI Hurt Your Reputation? | Skilled Use of AI Tools at Work Draws Negative Reviews - EJ Tech

2025-05-13
EJ Tech
Why's our monitor labelling this an incident or hazard?
The article centers on research findings about social biases and reputational harm perceptions linked to AI tool usage at work. While reputational harm is discussed, it is about social attitudes rather than an AI system causing harm directly or indirectly. No incident or hazard involving AI system malfunction or misuse is described. The article enhances understanding of societal implications of AI adoption but does not report a new AI Incident or AI Hazard. Hence, it fits the definition of Complementary Information.
Highlighting Security Concerns, Microsoft Bans Employees from Using DeepSeek : Okezone Techno

2025-05-11
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) whose use is restricted by Microsoft due to plausible risks of harm, including data security breaches and propaganda influence. Although no direct harm has been reported, the concerns about data being stored in China under laws that may compel cooperation with intelligence agencies and censorship indicate a credible risk of harm. Microsoft's ban is a preventive action addressing these risks. Therefore, this qualifies as an AI Hazard, as the development and use of DeepSeek could plausibly lead to an AI Incident if these risks materialize.
Microsoft Employees Banned from Using DeepSeek

2025-05-10
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses concerns about data security and propaganda, which are recognized harms under the framework (violations of rights and harm to communities). Microsoft banning its employees from using DeepSeek is a response to these plausible risks, indicating that harm has not yet occurred but could plausibly occur if the AI system were used. There is no indication of actual harm or incident resulting from DeepSeek's use by Microsoft employees. Hence, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and potential harm.
Microsoft Bans Its Employees from Using DeepSeek

2025-05-11
Tempo Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and a corporate policy banning its use by employees. There is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The announcement is a preventive governance measure addressing potential risks, but no realized or imminent harm is described. Therefore, this is Complementary Information about societal and governance responses to AI risks rather than an AI Incident or AI Hazard.
Here's Why Microsoft Bans Its Employees from Using DeepSeek AI

2025-05-11
Tempo Media
Why's our monitor labelling this an incident or hazard?
The article describes Microsoft's preventive ban on the use of a specific AI application (DeepSeek) by its employees due to concerns about data theft, propaganda, and unsafe code, which are potential risks but no actual harm or incident has been reported. The adoption of a modified, safer version of the AI model within Microsoft's trusted platform is a governance and mitigation response. Since no realized harm or incident is described, but plausible risks are acknowledged and addressed, this event fits the definition of Complementary Information, as it provides context on corporate risk management and AI governance rather than reporting an AI Incident or AI Hazard.
Microsoft Employees Banned from Using the DeepSeek App; Here's the Reason - Radar Madura

2025-05-08
radarmadura.jawapos.com
Why's our monitor labelling this an incident or hazard?
DeepSeek is an AI-related application (likely involving AI for search or content processing) whose use raises significant concerns about data privacy and censorship, which could plausibly lead to violations of user rights and dissemination of propaganda. Microsoft’s preventive ban reflects recognition of these plausible harms. Since no actual harm has been reported yet, but the risks are credible and directly linked to the AI system's use, this qualifies as an AI Hazard. The article focuses on the potential risks and preventive measures rather than reporting an actual incident or harm, so it is not an AI Incident or Complementary Information.
Microsoft Bans Its Workers from Using DeepSeek; Here's Why | Harianjogja.com

2025-05-12
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions DeepSeek, an AI chatbot system, and the concerns about its data being stored in China under laws that may compel cooperation with intelligence agencies and censorship of sensitive topics. These factors create a plausible risk of harm related to privacy violations and propaganda dissemination. Microsoft’s decision to ban its employees from using DeepSeek is a response to these potential harms, indicating a credible risk rather than a realized incident. There is no indication that any actual harm has occurred yet, so it does not meet the criteria for an AI Incident. The event is not merely general AI news or a response to a past incident, so it is not Complementary Information. Hence, the classification as AI Hazard is appropriate.
Microsoft Bans Employees from Using DeepSeek

2025-05-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, DeepSeek, which is used as a chatbot and AI model. The concerns raised relate to data privacy and propaganda dissemination, which fall under violations of rights and harm to communities. Microsoft and the US government have taken preventive actions by banning or blocking DeepSeek usage internally, indicating recognition of plausible future harm. Since no actual harm or incident is reported, but credible risks are identified and acted upon, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the ban due to risk, not on updates or responses to a past incident. It is not Unrelated because the AI system and its risks are central to the event.
Microsoft Bans Its Employees from Using DeepSeek

2025-05-11
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses concerns about data security and propaganda influence, which could plausibly lead to harms such as misinformation or privacy violations. No actual harm is reported, only preventive actions by Microsoft. Hence, it fits the definition of an AI Hazard, as the event involves plausible future harm from the AI system's use. It is not Complementary Information because the main focus is on the risk and prohibition, not on updates or responses to a past incident. It is not an AI Incident because no realized harm is described.
Microsoft Employees Banned from Using DeepSeek - Beritaja

2025-05-10
Beritaja.com
Why's our monitor labelling this an incident or hazard?
The article centers on Microsoft's decision to prohibit employee use of DeepSeek due to security and propaganda concerns linked to the AI system's data storage in China and censorship practices. While the AI system is clearly involved, and there are concerns about potential misuse or harmful influence, no actual harm or incident is reported. The event reflects a credible risk of harm (e.g., data security breaches, propaganda dissemination) that could plausibly occur if the AI system were used unrestricted. Hence, it fits the definition of an AI Hazard. The article does not primarily focus on a past incident or realized harm, nor is it merely complementary information about responses or ecosystem developments. Therefore, the classification is AI Hazard.
Microsoft Discloses for the First Time: Employees Banned from Using the DeepSeek App!

2025-05-09
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses Microsoft's decision to ban its use internally due to risks related to data security, propaganda influence, and censorship, which are potential harms. However, no actual harm or incident caused by the AI system is reported. The focus is on risk mitigation, safety assessments, and governance responses to potential AI-related risks. Therefore, this event fits the definition of Complementary Information, as it provides supporting data and context about AI system risks and corporate governance responses without describing a new AI Incident or AI Hazard.
Microsoft Bans Employees from Using DeepSeek

2025-05-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
Microsoft's ban on DeepSeek usage by employees is based on concerns about data security vulnerabilities and the generation of potentially biased or propagandistic content. These concerns relate to the AI system's use and its potential to cause harm, such as data breaches or dissemination of misleading information. Since no actual harm or incident has been reported, but the risk is credible and has prompted preventive measures, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Microsoft President Says the Company Has Internally Banned Employees from Using the DeepSeek App - cnBeta.COM (mobile edition)

2025-05-08
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses Microsoft's response to risks related to data privacy, censorship, and propaganda influence, which are potential harms associated with AI use. However, the article does not report any actual harm or incident caused by DeepSeek, nor does it describe a plausible imminent harm event. Instead, it details Microsoft's internal ban, risk assessment, and modification of the AI model to mitigate harmful effects. This fits the definition of Complementary Information, as it focuses on governance and mitigation measures rather than a new AI Incident or AI Hazard.
Microsoft President Says the Company Has Internally Banned Employees from Using the DeepSeek App - NetEase Mobile

2025-05-09
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses concerns about data security, censorship, and propaganda influence, which are potential harms related to human rights and informational integrity. Microsoft's ban on employee use and refusal to list the app in its store are preventive actions acknowledging these risks. Since no actual harm or incident is reported, but the risks are credible and plausible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI system use and associated risks.
Microsoft Blocks Chinese AI App Over Data and Propaganda Concerns | Law-Order

2025-05-08
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI app) whose use has been prohibited due to concerns about data security and propaganda, which could plausibly lead to violations of rights or harm to communities if exploited. Since no actual harm has been reported yet, but the risks are credible and significant, this qualifies as an AI Hazard rather than an Incident or Complementary Information.
Microsoft Doesn't Allow Its Employees to Use China's Deepseek - President

2025-05-08
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek's AI app) and concerns about its use related to data vulnerability and propaganda. However, there is no indication that any harm has occurred or that the AI system has malfunctioned or been misused to cause harm. The focus is on preventive measures and risk mitigation by Microsoft, which constitutes a governance response rather than an incident or hazard. Therefore, this is best classified as Complementary Information, as it provides context on societal and corporate responses to AI risks without describing a realized or imminent harm.
Microsoft Advocates That US Open Public Government Data For AI Training

2025-05-09
MediaPost
Why's our monitor labelling this an incident or hazard?
The article centers on policy advocacy and strategic considerations regarding AI data access and competition between nations. It does not report any actual AI-related harm, malfunction, or misuse that has occurred, nor does it describe a specific event where AI systems have caused or could plausibly cause harm. The mention of banning a Chinese AI app is a security and policy measure, not an incident or hazard involving AI harm. Therefore, this is best classified as Complementary Information, providing context and updates on AI governance and strategic responses.
Microsoft doesn't allow its employees to use China's Deepseek, says boss

2025-05-09
Times LIVE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek AI app) and concerns about its use leading to potential risks such as data vulnerability and propaganda dissemination. These concerns imply plausible future harm if the app were used, but there is no indication that harm has already occurred. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Microsoft doesn't allow its employees to use China's Deepseek - President

2025-05-08
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use due to data security and propaganda risks. However, there is no indication that any harm has occurred yet; rather, Microsoft is taking precautionary measures to prevent potential harm. This constitutes a societal and governance response to AI-related risks, enhancing understanding and management of AI ecosystem risks. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
For Security Reasons, Microsoft Bans Its Employees from Using DeepSeek's AI

2025-05-10
Pplware
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and discusses its use and potential risks, particularly regarding data privacy and influence. However, there is no indication that any harm has occurred due to the AI's development, use, or malfunction. The concerns are about plausible future risks, and the actions taken are preventive. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet materialized.
Microsoft Bans Employees from Using DeepSeek

2025-05-09
TudoCelular.com
Why's our monitor labelling this an incident or hazard?
The article discusses Microsoft's internal policy to prohibit employee use of DeepSeek and the company's efforts to remove harmful side effects from the AI model before deployment. It also references government actions to restrict Chinese AI use. These points reflect responses to potential AI risks rather than an actual AI Incident or Hazard. No direct or indirect harm has occurred, nor is there a clear plausible future harm described. The focus is on mitigation and safer deployment, fitting the definition of Complementary Information.
New Rule at Microsoft: One of the Market's Leading AIs Cannot Be Used

2025-05-09
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about its use and data security, which could plausibly lead to harms such as violations of privacy rights and censorship (human rights violations). However, the article does not report any actual harm or incident caused by the AI system's use or malfunction. Instead, it focuses on preventive measures and concerns expressed by Microsoft, making this an AI Hazard rather than an AI Incident. There is a plausible risk that the AI's data handling and censorship could lead to violations of rights or other harms if used widely or without safeguards.
Employees of This Big Tech Firm Are Banned from Using DeepSeek

2025-05-09
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (DeepSeek) and concerns about its data storage and privacy policies, which relate to potential risks. However, no actual harm or incident has occurred; the event is about a preventive corporate policy and concerns raised during a Senate hearing. This fits the definition of Complementary Information, as it details governance and risk management responses to AI-related concerns without describing a specific AI Incident or AI Hazard.
Microsoft Employees Are Banned from Using DeepSeek, President Says

2025-05-09
Estadão
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek chatbot) and discusses its use and deployment. The prohibition and security concerns indicate potential risks related to data security and propaganda, which could plausibly lead to harms such as violations of rights or harm to communities if the AI were misused or malfunctioned. However, no actual harm or incident has been reported, only preventive measures and risk management. Therefore, this qualifies as an AI Hazard, reflecting plausible future harm rather than a realized incident.
Microsoft Bans Employees from Using DeepSeek over Security and Propaganda Risks | TugaTech

2025-05-09
TugaTech
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) whose use has led to realized harms or significant risks: data privacy violations due to data storage in China and potential propaganda dissemination through AI-generated responses. Microsoft's prohibition of employee use and refusal to list the app in its store are responses to these harms. The AI model's modification to remove harmful side effects further confirms the presence of problematic AI outputs. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to violations of rights and harm to communities through propaganda and data security risks. The event is not merely a potential hazard or complementary information but a concrete incident prompting organizational action.
Microsoft Bans DeepSeek for Its Employees but Offers the Chinese AI to Its Clients

2025-05-09
Les Numériques
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system DeepSeek and its use by Microsoft employees and clients. The concerns about data being stored in China and censorship represent plausible risks of harm, including violations of privacy and informational rights, which fit the definition of AI Hazard. There is no indication that any harm has yet occurred, so it is not an AI Incident. The event is more than just general AI news or a product update, as it involves concrete concerns about data privacy and censorship risks tied to the AI system's deployment and use. Hence, it is not Complementary Information or Unrelated. The classification as AI Hazard is appropriate because the event centers on potential harms that could plausibly arise from the AI system's use and data handling practices.
Microsoft Bans the DeepSeek App for Its Employees, Citing Data Security and Propaganda Concerns, While Confirming That DeepSeek Will No Longer Be Listed in Its App Store

2025-05-09
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) whose use has led to recognized harms related to data security and propaganda, which are forms of harm to individuals and communities. Microsoft’s ban and removal of the app from its store are responses to these harms but do not negate the fact that the AI system’s use has already caused or is causing significant risks. The Congressional report and Microsoft’s statements confirm that the AI system’s development and use have directly or indirectly led to violations of data security and potential misinformation, fitting the definition of an AI Incident. The event is not merely a potential risk (hazard) or a complementary information update but a concrete incident involving harm or risk of harm that has prompted corporate action.
Microsoft Bans Its Employees from Using the Chinese App DeepSeek

2025-05-09
Boursier.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and concerns about its use, specifically regarding data security and propaganda content, which could plausibly lead to harms such as violations of privacy, misinformation, or influence operations. However, the article does not report any realized harm or incident caused by the AI system. Instead, it discusses preventive actions and risk management by Microsoft. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harms from the AI system's use, but no direct or indirect harm has yet occurred.
Microsoft Bans the DeepSeek App for Its Employees over Security and Propaganda Concerns

2025-05-09
Fredzone
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its use and modification. The concerns raised relate to data privacy (user data stored in China under Chinese law) and content censorship potentially leading to propaganda dissemination. These concerns align with possible violations of rights and harm to communities if realized. However, the article focuses on Microsoft's internal policy decision to ban the app for employees and its efforts to modify the AI model to reduce harmful effects. There is no report of actual harm occurring, only potential risks and preventive actions. The event thus does not meet the threshold for an AI Incident (no realized harm) or an AI Hazard (no direct plausible future harm described beyond general concerns). Instead, it fits the definition of Complementary Information, as it details governance and security responses to AI risks and provides context on AI ecosystem developments.
Microsoft Employees Are Not Permitted to Use DeepSeek

2025-05-09
Donya-e-Eqtesad (newspaper)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the DeepSeek application) and concerns about its use due to security and reputational risks. However, there is no indication that any harm has occurred or that there is a plausible imminent harm. The event is about a policy decision restricting use, which is a governance or organizational response to potential risks rather than an incident or hazard itself. Therefore, it qualifies as Complementary Information.
"Microsoft" Employees Are Not Allowed to Use "DeepSeek"

2025-05-09
ISNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and Microsoft's response to potential security, privacy, and harmful content risks associated with it. The ban on employee use and removal of harmful side effects from the AI model indicate concerns about plausible future harms. Since no actual harm or incident is reported, and the focus is on risk mitigation and policy decisions, this qualifies as Complementary Information rather than an AI Incident or Hazard.

Microsoft employees are not allowed to use DeepSeek

2025-05-09
انتخاب
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (DeepSeek's model) and highlights privacy and security concerns due to data storage and legal compliance in China. However, there is no direct or indirect harm reported as having occurred, only potential risks such as ad injection or unsafe code generation. The article mainly discusses policy decisions and potential risks rather than realized harm or incidents. Therefore, this is best classified as Complementary Information, providing context on AI system deployment, privacy concerns, and risk management.

Use of DeepSeek AI is banned for Microsoft employees

2025-05-09
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns its use and associated risks. Although no direct harm has occurred, the decision to ban its use by Microsoft employees is based on credible concerns about data security and political influence, which could plausibly lead to harms such as violations of privacy or political manipulation. The event is not a response or update to a past incident, nor is it unrelated general news. Therefore, it fits the definition of an AI Hazard, as it highlights a credible potential for harm from the AI system's use.

ايتنا - Microsoft bans employees from using the DeepSeek app!

2025-05-11
جهان مانا - پایگاه خبری اطلاع رسانی
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the DeepSeek chatbot) whose use is restricted due to concerns about data security and censorship, which implicate user rights. Although no direct harm is reported as having occurred, the concerns are that the AI system's use could lead to privacy violations and manipulation through propaganda, which count as harms under the framework. Because the article describes preventive measures against recognized risks rather than a realized incident, this is best classified as an AI Hazard reflecting plausible future harm from the AI system's use.

ايتنا - Microsoft bans employees from using the DeepSeek app!

2025-05-11
ايتنا - سایت خبری تحلیلی فناوری اطلاعات و ارتباطات
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use is restricted by Microsoft because of plausible risks related to data privacy violations and manipulation of information by state influence. However, no actual harm has been reported yet; the concerns are about potential future risks. Therefore, this qualifies as an AI Hazard, as the development and use of this AI system could plausibly lead to harms such as violations of privacy rights and misinformation influenced by government propaganda.

Microsoft bans its employees from using Chinese AI DeepSeek

2025-05-09
ana.ir
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use due to security risks related to data privacy and potential government surveillance. However, the article does not report any actual harm or incident caused by the AI system; rather, it describes a preventive measure and concerns about plausible future risks. Therefore, this is best classified as Complementary Information, as it provides context and governance response to potential AI-related risks without describing a realized incident or direct harm.

جمهور - DeepSeek banned for Microsoft employees

2025-05-10
خبرگزاری جمهور
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek and its AI model R1) and concerns about its use leading to potential harms such as privacy violations, censorship (a form of rights violation), and propaganda influence. However, the article describes a preventive measure (ban) by Microsoft to avoid these harms rather than an incident where harm has already occurred. The risks are plausible future harms related to data privacy, censorship, and propaganda, making this an AI Hazard. The article also discusses governance and risk mitigation actions, but the primary focus is on the potential risks rather than realized harm or a response to a past incident.

Microsoft bans its employees from using DeepSeek AI

2025-05-09
خبرگزاری ایلنا
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek AI) and concerns about data privacy and security risks related to its use. However, there is no indication that any actual harm has occurred yet; rather, the company is taking preventive measures to mitigate potential risks. This constitutes a plausible risk of harm (e.g., privacy violations or data misuse) but no realized harm is reported. Therefore, this is best classified as an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has occurred.

Microsoft Vice Chair: "We do not allow our employees to use the DeepSeek app"

2025-05-09
دیجیاتو
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (DeepSeek AI and its model R1) and discusses concerns about data privacy, censorship, and propaganda, which are potential harms related to human rights and information integrity. However, the article does not report any realized harm or incident caused by the AI system; rather, it highlights preventive actions, restrictions, and safety evaluations by Microsoft. Therefore, this event does not qualify as an AI Incident or AI Hazard but fits the definition of Complementary Information, as it provides context on governance responses and risk management related to AI systems.

Microsoft bans employees from using Chinese AI DeepSeek over data leak risks

2025-05-10
InternetUA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (DeepSeek) and concerns about data leakage and propaganda influence, which relate to potential violations of privacy and security (human rights and possibly harm to communities). However, the article does not report any actual harm occurring yet, only the plausible risk of harm due to data leakage and propaganda. Therefore, this situation constitutes an AI Hazard, as the development and use of DeepSeek could plausibly lead to an AI Incident involving data breaches or propaganda dissemination, but no direct harm has been reported so far.

Microsoft bans its employees from using China's DeepSeek chatbot

2025-05-09
www.BIN.com.ua Business Information Network
Why's our monitor labelling this an incident or hazard?
The event describes Microsoft's restriction on using an AI chatbot due to data privacy and propaganda concerns, which are potential risks but no actual harm has been reported. The AI system (DeepSeek) is involved, and the concerns relate to plausible future harms such as data misuse or propaganda influence. Since no harm has materialized, this fits the definition of an AI Hazard rather than an Incident. The modification of the AI model to remove harmful side effects is a mitigation effort but does not indicate realized harm.

Microsoft employees banned from using the DeepSeek app

2025-05-10
InternetUA
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its use and potential harms related to data security, censorship, and propaganda. However, no direct or indirect harm has been reported as having occurred; instead, Microsoft has proactively banned its employees from using the app and modified the model to mitigate risks. This constitutes a governance and risk management response rather than an AI Incident or AI Hazard. The focus is on policy decisions, security concerns, and mitigation efforts, which aligns with the definition of Complementary Information.

Microsoft employees prohibited from using the DeepSeek chatbot

2025-05-09
ms.detector.media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek chatbot) whose use has led to concerns about data security and propaganda dissemination, which are forms of harm to communities and violations of rights. The ban by Microsoft and other organizations is a response to these harms. The article describes realized harms and direct involvement of the AI system in these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Microsoft employees banned from using DeepSeek

2025-05-09
HiTech.Expert
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (DeepSeek) and discusses its use and associated risks, particularly data security and propaganda concerns. Microsoft's prohibition of employee use is a preventive measure based on plausible risks rather than a response to an incident where harm has already occurred. Therefore, this event fits the definition of an AI Hazard, as it highlights credible potential harms that could arise from the AI system's use or misuse, but no direct or indirect harm has been reported yet.

Microsoft explains why company employees are barred from using the DeepSeek neural network

2025-05-10
http://kreschatic.kiev.ua/
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the DeepSeek chatbot) and its use by Microsoft employees. The ban is due to unresolved security and privacy issues, censorship, and propaganda risks, which implicate human rights and harm to communities. While no direct harm is reported as having occurred, the risks are credible and significant. The event does not describe an actual incident of harm but highlights plausible future harm from the AI system's use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Microsoft bans its employees from using China's DeepSeek models

2025-05-09
Mezha.Media
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use and data handling, which could plausibly lead to harm such as data breaches or propaganda influence. However, no actual harm or incident has been reported; rather, Microsoft is taking preventive actions and modifying the AI model to mitigate risks. Therefore, this is best classified as Complementary Information, as it provides updates on governance and risk management related to an AI system without describing a realized harm or incident.

Microsoft warns its employees against using the DeepSeek app... what happened? - اليوم السابع

2025-05-09
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) and concerns about its use leading to potential harms such as data security risks and propaganda dissemination. Microsoft’s ban on employee use and the mention of modifying the AI model to remove harmful side effects indicate recognition of these harms. Since the article describes actual concerns and restrictions due to realized or ongoing risks, this qualifies as an AI Incident rather than a mere hazard or complementary information. The harms relate to violations of data security and potential influence on information integrity, fitting the definition of harm to rights and communities.

Microsoft warns its employees against using the DeepSeek app... - الوكيل الإخباري

2025-05-10
الوكيل الاخباري
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to potential harms such as data security breaches and propaganda influence, which could affect users and information integrity. However, the article describes a preventive measure (banning usage) rather than a realized harm or incident. Therefore, this is a governance response to a potential risk rather than an incident or direct hazard. It fits the definition of Complementary Information as it provides context on societal and organizational responses to AI-related risks.

Microsoft bars its employees from using the Chinese DeepSeek app

2025-05-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses Microsoft's preventive action to avoid potential harms related to data privacy and propaganda influence. These concerns represent plausible future harms that could arise from the use of the AI system, fitting the definition of an AI Hazard. There is no indication that any harm has already occurred, so it is not an AI Incident. The article is not merely complementary information about AI governance or responses but centers on the potential risks and the ban decision, which is a direct response to those risks. Hence, AI Hazard is the appropriate classification.

Microsoft bans DeepSeek for all its employees: potential threats from China | صحيفة الخليج

2025-05-10
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepSeek) and concerns about its use leading to potential harm related to data privacy and security, which could constitute violations of rights or harm to property or communities if realized. However, the article describes a preventive ban by Microsoft to avoid such harms, with no indication that harm has already occurred. Therefore, this is an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (data breaches, privacy violations), but no incident has yet materialized. The event is not merely complementary information because it reports a concrete action taken due to credible risk, nor is it unrelated since it directly concerns AI system use and associated risks.

Microsoft bans DeepSeek for all its employees: potential threats from China - الخبر اليمني

2025-05-10
الخبر | خدمة إخبارية شاملة
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (DeepSeek) and discusses its use and the associated risks. The ban by Microsoft and other organizations is due to concerns about data security and potential unauthorized access by a foreign government, which could lead to violations of privacy and security. However, the article does not report any actual harm occurring from the use of DeepSeek, only the plausible risk of such harm. Therefore, this event fits the definition of an AI Hazard, as it concerns a credible potential for harm stemming from the AI system's use, but no incident has yet materialized.

Microsoft bans use of the DeepSeek app

2025-05-10
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The article describes Microsoft's preventive action against potential risks associated with the AI system DeepSeek, including data security and propaganda concerns. However, it does not indicate that any actual harm has occurred. The modification of the AI model to remove negative effects is a mitigation measure. Therefore, this event represents a plausible risk scenario and a response to it, fitting the definition of Complementary Information rather than an AI Incident or Hazard.

Microsoft bars its employees from using the DeepSeek platform

2025-05-11
Asharq News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepSeek) whose use has led to significant harms: privacy violations through extensive data collection and transfer to a foreign government, censorship and propaganda risks, and a major data breach compromising user data. These harms fall under violations of rights and harm to communities. Microsoft's ban and governmental restrictions reflect recognition of these harms. The AI system's use has directly or indirectly caused these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.