Anthropic Accuses Chinese AI Firms of Mass Data Theft via Fake Accounts

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

US AI company Anthropic accused Chinese firms DeepSeek, Moonshot AI, and MiniMax of creating over 24,000 fake accounts to extract data from its Claude chatbot. The data, obtained through over 16 million interactions, was allegedly used to train competing AI models, violating Anthropic's terms and raising intellectual property and security concerns.[AI generated]
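
The technique at issue, often called distillation, amounts to harvesting one model's outputs as supervised training data for another. The sketch below is a minimal, hypothetical illustration of that pattern only: the function names, the canned teacher response, and the output file are invented for exposition and do not depict Anthropic's API or the accused firms' actual tooling.

    import json

    def teacher_chat(prompt: str) -> str:
        # Stand-in for a commercial chat-API call (the "teacher" model).
        return f"[teacher answer to: {prompt}]"

    def build_distillation_set(prompts: list[str]) -> list[dict]:
        # Collect (prompt, completion) pairs as supervised fine-tuning
        # data for a "student" model.
        return [{"prompt": p, "completion": teacher_chat(p)} for p in prompts]

    if __name__ == "__main__":
        pairs = build_distillation_set(["Explain quicksort.", "Summarise GDPR."])
        with open("distill.jsonl", "w", encoding="utf-8") as f:
            for row in pairs:
                f.write(json.dumps(row) + "\n")
        # A student fine-tuned on pairs like these inherits much of the
        # teacher's behaviour without access to its weights or training data.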

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Claude) and the alleged misuse of its outputs by other AI firms to train their own models without authorization. This constitutes a violation of intellectual property rights and legal obligations related to AI development and use. Since the harm (violation of rights) has already occurred through unauthorized access and use, this qualifies as an AI Incident under the framework.[AI generated]
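
The rationales on this page repeatedly apply the same decision rule: realized harm makes an event an AI Incident, credible but unrealized harm makes it an AI Hazard, and follow-up context is Complementary Information. A schematic sketch of that rule, with invented predicate names and no claim to be the monitor's actual implementation, might look like this:

    from enum import Enum

    class Label(Enum):
        AI_INCIDENT = "AI incident"
        AI_HAZARD = "AI hazard"
        COMPLEMENTARY_INFORMATION = "Complementary information"
        UNRELATED = "Unrelated"

    def classify(involves_ai: bool, harm_realized: bool,
                 harm_plausible: bool, is_follow_up: bool) -> Label:
        # An event qualifies only if an AI system is central to it.
        if not involves_ai:
            return Label.UNRELATED
        # Harm that has already occurred (e.g., a rights violation) -> Incident.
        if harm_realized:
            return Label.AI_INCIDENT
        # Credible but unrealized harm -> Hazard.
        if harm_plausible:
            return Label.AI_HAZARD
        # Updates or context on past events -> Complementary Information.
        if is_follow_up:
            return Label.COMPLEMENTARY_INFORMATION
        return Label.UNRELATED

Under this rule the event above is labeled an AI Incident: the alleged rights violation (unauthorized extraction and use of Claude's outputs) has already occurred.
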
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security

Affected stakeholders
Business

Harm types
Economic/Property

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots
Content generation

Articles about this incident or hazard

Anthropic Accuses DeepSeek, Other China-Based AI Firms of Free-Riding

2026-02-24
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) and the alleged misuse of its outputs by other AI firms to train their own models without authorization. This constitutes a violation of intellectual property rights and legal obligations related to AI development and use. Since the harm (violation of rights) has already occurred through unauthorized access and use, this qualifies as an AI Incident under the framework.

Pentagon issues ultimatum to Anthropic: Claude restrictions could void $200 million contract

2026-02-24
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event centers on an AI system (Claude) explicitly mentioned as integrated into sensitive military operations, fulfilling the AI System criterion. The issue arises from the use and potential misuse of this AI system, with the DoD demanding fewer restrictions and Anthropic imposing ethical limits. The threat to a major contract and supply chain status indicates serious governance and operational stakes. No actual harm (injury, rights violations, or property/community/environmental damage) is reported as having occurred yet, but the potential for harm is significant, especially concerning autonomous weapons and mass surveillance. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if restrictions are lifted or ignored. It is not Complementary Information because the main focus is not on responses or updates to past incidents but on an ongoing negotiation with potential future harm. It is not unrelated because the AI system and its use are central to the event.

Chinese companies used Claude to improve their own models, says Anthropic

2026-02-23
uol.com.br
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of an AI system (Claude) by companies creating fake accounts to generate interactions and improve their own models, which constitutes unauthorized use and likely intellectual property rights violations. The misuse is directly linked to the AI system's outputs and resources, leading to a breach of obligations protecting intellectual property rights. This fits the definition of an AI Incident as the AI system's use has indirectly led to a breach of applicable law protecting intellectual property rights.

US AI giant accuses Chinese rivals of mass data theft

2026-02-23
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude chatbot and the derived models) and describes the use of AI techniques (distillation) to illicitly extract capabilities, which is a misuse of AI development and use. The harm includes violation of intellectual property rights (a breach of obligations under applicable law) and potential risks to national security and safety due to the creation of AI models lacking safety guardrails. These harms have directly resulted from the AI systems' development and use, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic Accuses 3 Chinese Companies Of Mass AI Data Harvesting, Warns 'Window To Act' Narrow

2026-02-24
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI and rival chatbots) and describes the use and misuse of these AI systems in a way that directly leads to harm, specifically intellectual property violations through unauthorized data extraction and use. The fraudulent accounts and coordinated campaigns indicate deliberate misuse of AI systems to gain competitive advantage illicitly. This meets the criteria for an AI Incident because the development and use of AI systems have directly led to a breach of intellectual property rights, a recognized harm under the framework.

Anthropic accuses Chinese AI labs of distilling Claude; Elon Musk calls it 'guilty'

2026-02-24
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and derivative models) and their illicit use through distillation attacks, which is a misuse of AI capabilities. The harm includes violation of intellectual property rights and potential national security risks due to the proliferation of unprotected AI capabilities in military and surveillance contexts. These harms are direct and significant, meeting the criteria for an AI Incident. The involvement of AI is clear, the misuse is documented, and the harms are articulated, including legal and security implications. Although there is a broader debate about data ethics, the primary focus is on the illicit extraction and use of AI capabilities causing harm, not just potential or hypothetical risks, thus not merely a hazard or complementary information.

Anthropic accuses Chinese AI labs of stealing data from Claude By Investing.com

2026-02-23
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the unauthorized extraction of data from an AI model to train competing models, which constitutes a violation of intellectual property rights (a breach of obligations under applicable law). The unauthorized use and potential deployment of distilled models lacking safeguards also create security risks, which can be considered harm to communities or property. Since the harm is occurring through illicit data extraction and unauthorized model training, this qualifies as an AI Incident. The involvement of AI systems is explicit, and the harm is direct and ongoing.

24,000 fake accounts, 16 million prompts: Anthropic claims Chinese firms copied Claude AI

2026-02-24
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI and the derivative models) and their use in a manner that could plausibly lead to harm. The illicit distillation process bypasses safety guardrails, increasing the risk of misuse in strategic and harmful ways. While no direct harm has been documented, the potential for significant future harm is credible and clearly articulated, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. The geopolitical and security implications further underscore the plausible risk of harm.

Model copied? Anthropic accuses Chinese competitors of AI espionage

2026-02-23
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models like Claude) and their misuse (using fake accounts to extract knowledge and train competing models). The alleged activity infringes on intellectual property rights and may indirectly contribute to human rights concerns due to censorship compliance. However, the article does not describe any actual harm or incident resulting from this misuse, only the accusation and potential implications. Thus, it fits the definition of an AI Hazard, as the misuse could plausibly lead to AI incidents such as intellectual property violations or censorship-related harms, but no incident has yet materialized according to the report.

Anthropic accuses DeepSeek and 2 other Chinese AI models of stealing its data, people on internet say it is fair

2026-02-24
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and Chinese AI models) and discusses their development and use. The core issue is alleged unauthorized data extraction (illicit distillation) which constitutes a violation of intellectual property rights, a recognized form of AI harm. However, the article does not report that this has led to a legal ruling, enforcement action, or direct harm such as injury, disruption, or breach of law enforcement. Instead, it focuses on accusations, public reactions, and geopolitical tensions. There is no indication that the illicit distillation has caused realized harm beyond the allegation itself, nor that it has plausibly led to immediate physical or systemic harm. The event is thus not an AI Incident or AI Hazard but rather a significant update and context on AI ecosystem conflicts and legal disputes, fitting the definition of Complementary Information.

Chinese companies used Claude to improve their own models, says Anthropic

2026-02-23
Terra
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude) by other AI companies to improve their own models illicitly. The misuse includes creating fake accounts and interactions to extract resources, which is a direct violation of terms and likely intellectual property rights. The harm is realized as unauthorized use and potential breach of legal protections related to AI model training and resource usage. The involvement of AI systems is clear, and the misuse has already occurred, leading to direct harm to the original AI system provider. Hence, this fits the definition of an AI Incident rather than a hazard or complementary information.

China, the Pentagon, and "old tech": Anthropic at the epicenter of the AI revolution

2026-02-24
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and Chinese competitors' models) and details direct and indirect harms: data theft and unauthorized use of AI capabilities (intellectual property violation), national security risks from AI-enabled military and surveillance applications (potential harm to human rights and security), and economic disruption affecting established companies. The ethical conflict with the Pentagon over lethal AI use and surveillance further indicates potential or ongoing violations of rights and legal frameworks. These factors meet the criteria for an AI Incident due to realized harms and ongoing risks directly linked to AI system development and use.

Anthropic Accuses Chinese Companies of Siphoning Data From Claude

2026-02-23
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems and their misuse (use of fraudulent accounts to extract data from an AI model). The misuse leads to harm in the form of intellectual property rights violations and potential national security risks, which fall under violations of obligations intended to protect intellectual property and possibly harm to communities or national security. Although the article reports no injury and no legal ruling yet, the intellectual property theft and security concerns stem directly from the misuse and are already ongoing, so the event qualifies as an AI Incident. The involvement of AI systems and the misuse leading to harm are explicit and central to the event.

Anthropic says DeepSeek and other Chinese AI companies fraudulently used Claude

2026-02-23
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and competing AI models) and describes the misuse of AI outputs (distillation attacks) that have directly led to violations of intellectual property rights and pose security risks that could harm communities or broader society. The fraudulent use of Claude's outputs by competitors constitutes a breach of legal and ethical obligations, and the potential for these less safeguarded models to be used maliciously (e.g., bioweapons) indicates significant harm. Anthropic's detailed disclosure of the scale and impact of these attacks confirms realized harm, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI, Anthropic accuse Chinese rivals of mass AI data theft

2026-02-24
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how Chinese AI firms used AI systems to illicitly extract capabilities from Anthropic's Claude chatbot, constituting industrial-scale intellectual property theft. This is a direct violation of intellectual property rights, one of the harms outlined in the AI Incident definition. Additionally, the article highlights the potential misuse risks due to lack of safety guardrails, which further supports the classification as an AI Incident. The involvement of AI systems is clear, and the harm is realized through the theft and potential misuse risks, not merely a future possibility. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese companies of unfair practices

2026-02-24
heise online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) and its misuse by other companies to train competing AI models illicitly. This misuse constitutes a violation of intellectual property rights and usage agreements, which falls under harm category (c). Additionally, Anthropic warns of national security risks from such unauthorized use, indicating significant harm. Since the misuse has already taken place and caused these harms, this qualifies as an AI Incident rather than a hazard or complementary information.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and the Chinese firms' AI models) and their use in a manner that directly leads to harm, specifically the violation of intellectual property rights and breach of export controls. The illicit distillation technique used to siphon capabilities without independent development constitutes a breach of obligations intended to protect intellectual property rights, fulfilling the criteria for an AI Incident. Additionally, the potential national security risks from models lacking safety guardrails further support the classification as an incident rather than a mere hazard or complementary information.

Anthropic says Chinese companies misused Claude AI; Elon Musk lashes out

2026-02-24
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI and other AI models) and their misuse through unauthorized large-scale data extraction and model distillation. The described activities could plausibly lead to AI incidents such as intellectual property violations, loss of control over AI capabilities, and potential misuse in military or surveillance contexts, which are significant harms. However, the article does not document any actual harm or incident resulting from these actions yet, only the identification of the threat and the urgency to respond. The presence of accusations and disputes about data theft and training practices further supports the potential for legal and ethical issues but does not confirm realized harm. Hence, the classification as an AI Hazard is appropriate.

Anthropic accuses three Chinese companies of stealing model data, warning of possible national security risks

2026-02-24
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude chatbot and the derived models) and describes the unauthorized use and replication of AI capabilities, which is a misuse of AI technology. The harm includes violations of intellectual property rights and the plausible risk of national security threats due to the loss of safety controls in the stolen models. Since the harm is realized (the theft and replication have occurred) and the risks are significant, this qualifies as an AI Incident under the framework, as it directly or indirectly leads to violations of intellectual property rights and potential harm to communities and national security.

Anthropic Says DeepSeek, MiniMax Distilled AI Models for Gains

2026-02-23
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude models and rival AI models) and describes the misuse of these systems through distillation campaigns that violate terms of service and intellectual property rights. The harm is realized as unauthorized extraction and use of AI outputs to improve competing AI products, which constitutes a breach of intellectual property rights and unfair competition. The involvement of fraudulent accounts and proxy services to evade detection further supports the classification as an AI Incident. The event is not merely a potential risk or complementary information but a concrete case of misuse causing harm.

Anthropic exposes how Chinese AI firms try to steal LLM tech

2026-02-23
Mashable
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and their misuse through distillation attacks to illicitly extract technology, which is a violation of intellectual property rights. However, it does not describe a concrete AI Incident where harm has already occurred (e.g., legal rulings, direct injury, or operational disruption) nor does it describe a plausible future harm event that is imminent or narrowly averted. Instead, it exposes ongoing illicit activity and calls for coordinated action, which fits the definition of Complementary Information as it enhances understanding of AI ecosystem risks and responses. The mention of national security concerns and the scale of the attacks underscores the significance but does not elevate the event to an AI Incident or AI Hazard classification based on the provided definitions.

"Stealing" AI models? US firm Anthropic accuses China's DeepSeek

2026-02-24
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and DeepSeek's models) and describes the use and development of AI systems through unauthorized large-scale access and data extraction. The alleged 'theft' of model capabilities and training data constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the potential circumvention of safety and censorship rules implies risks of misuse and harm. Since these harms are occurring or have occurred (unauthorized use and IP violation), the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses DeepSeek and other Chinese companies of improperly obtaining its data

2026-02-24
New York Times (Chinese)
Why's our monitor labelling this an incident or hazard?
The event describes the unauthorized use of AI-generated data to train other AI systems, which constitutes a violation of intellectual property rights and terms of service. Although this misuse is ongoing, the article does not indicate direct harm such as legal rulings, damages, or other consequences yet. Therefore, it represents a plausible risk of harm related to AI system misuse and intellectual property infringement, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because it reports a specific misuse event with potential legal and ethical implications.

Anthropic accuses DeepSeek and other Chinese companies of improperly obtaining its data

2026-02-24
New York Times (Chinese)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude chatbot and other AI chatbots under development). The misuse of data scraped from an AI system to train other AI systems constitutes a violation of intellectual property rights and terms of service, which falls under harm category (c) - violations of rights. Since the data scraping and use have already occurred, this is a realized harm, not just a potential risk. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems and the resulting violation of rights through unauthorized data use.

Anthropic accuses Chinese AI firms of data copying using fake accounts and AI distillation methods

2026-02-23
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI and competing AI models) and describes the use of fake accounts to extract data and AI outputs to train other models, which is a misuse of AI system outputs. This misuse directly leads to a violation of intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the potential national security risks mentioned further underscore the seriousness of the harm. Since the harm is realized (unauthorized data copying and use) and the AI system's role is pivotal, this is classified as an AI Incident rather than a hazard or complementary information.

Anthropic accuses three Chinese companies of harvesting its data

2026-02-24
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude chatbot and the Chinese companies' AI systems) and the unauthorized use of data from one AI system to train others, which is a misuse of AI technology. The harm includes violations of intellectual property rights (unauthorized data harvesting and use), which is a breach of legal protections, and potential national security risks from misuse of AI technologies. These harms have materialized or are ongoing, as the data harvesting has already occurred and is being used to train other AI systems. The event also references legal disputes and national security concerns, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information.

Days after OpenAI warning, Anthropic accuses three Chinese AI labs of extracting its data

2026-02-24
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of an AI system (Anthropic's Claude) through unauthorized data extraction and model distillation by other AI labs, which directly violates intellectual property rights and terms of service. The misuse has already happened, with millions of interactions using fraudulent accounts, and the copied models may lack safety safeguards, posing risks of misuse and harm. This constitutes a violation of rights and potential harm to communities and security, fitting the definition of an AI Incident rather than a hazard or complementary information.

After OpenAI, Anthropic warns US government on China; says Chinese AI models are stealing our data and the speed ...

2026-02-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their misuse through illicit distillation, which is a form of unauthorized use of AI outputs to train other AI systems. This misuse directly leads to violations of intellectual property rights and raises security concerns that could result in harm. The involvement of AI systems in the development and use stages, the realized violation of rights, and the potential for dangerous misuse meet the criteria for an AI Incident rather than a mere hazard or complementary information. Therefore, the classification as AI Incident is justified.

Anthropic flags large-scale 'distillation' attempts by China's DeepSeek, MiniMax and Moonshot

2026-02-24
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude) by multiple actors to extract capabilities illicitly, which is a direct misuse of the AI system. The harm includes violation of terms of service, potential breach of intellectual property rights, and undermining of export controls, which fall under violations of legal obligations and rights. The large-scale and coordinated nature of the campaign indicates significant harm beyond a single company, affecting the broader AI ecosystem and policy environment. Therefore, this qualifies as an AI Incident due to realized harm from misuse of AI systems leading to legal and rights violations.

Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI

2026-02-23
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and the illicitly trained models) and describes misuse in the development and use of AI. The harms described (offensive cyber operations, disinformation, mass surveillance) are serious and relate to violations of human rights and harm to communities. Since these harms are presented as potential consequences rather than realized incidents, the event fits the definition of an AI Hazard rather than an AI Incident. The article also calls for industry and legislative responses, indicating recognition of the plausible future harm. Therefore, the classification is AI Hazard.

Anthropic accuses DeepSeek, MiniMax of data copying, distillation attacks

2026-02-24
Business Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models) and details how rival AI developers used fraudulent means to extract proprietary outputs at scale, which is a direct breach of terms and intellectual property rights. The harm is realized as it involves unauthorized data copying and model output extraction, which is a violation of intellectual property rights. Anthropic's response and the scale of the campaigns confirm the seriousness and direct involvement of AI systems in causing this harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic Says Chinese AI Companies Improved Models By 'Illicitly' Copying Its Capabilities

2026-02-24
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and their outputs being used without authorization to train competing models, which is a misuse of AI development and use. This misuse leads to a violation of intellectual property rights and contractual obligations, which falls under harm category (c) in the AI Incident definition. Although no physical harm or criminal charges are mentioned, the harm to rights and competitive advantage is clear and directly linked to the AI systems' development and use. Hence, this is classified as an AI Incident.

Chinese companies distilled Claude to improve own models, Anthropic says

2026-02-23
CNA
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) to extract knowledge and improve other AI models, which is a direct involvement of AI systems. However, the article does not describe any actual harm occurring yet, such as violations of rights, health, or property damage. The misuse is ongoing and poses a credible risk to the AI ecosystem and companies' intellectual property, but no direct or indirect harm has materialized as per the article. Therefore, this situation is best classified as an AI Hazard, reflecting the plausible future harm from unauthorized AI model replication and capability theft.

Anthropic accuses DeepSeek and other Chinese firms of using Claude to train AI models

2026-02-24
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their development and use. The unauthorized large-scale extraction and use of Claude's capabilities by other companies directly violates intellectual property rights, a form of harm under the AI Incident criteria. The article reports that this has already occurred, not just a potential risk, thus constituting a realized harm. Although there are warnings about possible future misuse, the primary classification is AI Incident because the harm has materialized. The event is not merely a policy discussion or a general AI news item but details a specific harmful event involving AI systems.

Anthropic accuses Chinese AI firms of illicitly extracting Claude capabilities in large-scale data theft

2026-02-24
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the derivative models) and their illicit use by other AI firms. The harm includes intellectual property theft (a violation of intellectual property rights) and potential national security risks due to unsafe AI models derived from stolen capabilities. The use of millions of interactions and fake accounts to extract capabilities constitutes misuse of AI systems leading to significant harm. The event describes realized harm rather than just potential harm, as the illicit replication and evasion of export controls are ongoing and documented. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

US AI startup Anthropic accuses three Chinese companies of creating fake accounts; stolen model data could be diverted to military use

2026-02-23
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude model and Chinese AI companies' models). The misuse consists of creating fake accounts to extract data from Claude, which is then used to train competing AI models. This unauthorized data extraction constitutes a violation of intellectual property rights, a form of harm under the framework. Furthermore, the potential conversion of these capabilities for military or intelligence use raises concerns about harm to communities and national security. Since the harm is occurring through misuse and unauthorized data extraction, this qualifies as an AI Incident rather than a mere hazard or complementary information.

Anthropic accuses DeepSeek of plundering its AI models

2026-02-23
France 24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and DeepSeek's models) and describes the use of AI to extract proprietary model information and responses to develop competing AI models. This constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The harm is realized as Anthropic accuses DeepSeek of 'pillage' of its AI models, indicating actual unauthorized use rather than a mere potential risk. Hence, this qualifies as an AI Incident due to the direct link between AI system use and violation of intellectual property rights.

Anthropic accuses three Chinese companies of stealing Claude model data, warning of possible national security risks

2026-02-24
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the Claude AI model, where the unauthorized extraction and replication of its capabilities by other AI companies has directly led to intellectual property theft, a violation of rights, and potential national security risks. The harm includes violation of intellectual property rights and plausible future harm related to safety and security risks from models lacking proper safeguards. Since the harm is both realized (theft of IP) and potential (national security risks), this qualifies as an AI Incident.

Anthropic accuses Chinese AI labs, including DeepSeek, of extracting information to improve their models

2026-02-23
Observador
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their use in training other AI models through unauthorized data extraction. This constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition (c). The allegations describe realized harm through illicit use of AI outputs and data, not just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about general AI news or policy but concerns direct misuse of AI systems leading to rights violations.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
Punch Newspapers
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the Claude chatbot, whose outputs were exploited by other AI firms to develop competing models without authorization. This constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The article also highlights national security concerns and the circumvention of export controls, indicating significant legal and safety implications. Since the harm has already occurred through the illicit extraction and use of AI capabilities, this is not merely a potential risk but an actual incident involving AI systems.

Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

2026-02-23
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the unauthorized extraction of capabilities from Anthropic's Claude AI model by other AI labs through distillation. This misuse directly leads to violations of intellectual property rights and poses potential harm to national security by enabling the proliferation of AI models without necessary safeguards. These harms fall under the definition of an AI Incident, as the development and use of AI systems have directly led to significant harms including violation of intellectual property rights and risks to security. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic and the Pentagon reportedly at odds; military contract may change

2026-02-23
工商時報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) and its intended use by the military, with ethical and safety concerns leading to a possible contract cancellation. No direct or indirect harm has occurred yet, but the disagreement signals a credible risk or challenge in AI deployment for military purposes. This fits the definition of an AI Hazard, as the event could plausibly lead to harm or incidents if unresolved, but no harm has materialized at this stage.

Anthropic accuses China's DeepSeek of "misappropriating" its AI models

2026-02-23
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and DeepSeek's AI models) and details the use and misuse of these systems in a way that has directly led to harm, specifically the alleged theft of AI model technology and intellectual property rights violations. The large-scale unauthorized use of Claude to train competing models constitutes a breach of obligations protecting intellectual property rights, which is a recognized form of AI harm. Furthermore, the potential for the misused AI to evade safety and censorship rules raises concerns about indirect harms related to human rights and misuse. Since actual harm (intellectual property theft) has occurred and is central to the event, it is classified as an AI Incident rather than a hazard or complementary information.

Anthropic accuses China's DeepSeek of "misappropriating" its AI models

2026-02-23
RFI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and DeepSeek's AI models) and concerns the use and development of these AI systems. The alleged unauthorized use of Claude's outputs to train competing models constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. The potential circumvention of safety rules and the focus on sensitive topics suggest risks of misuse and harm to rights related to information access and censorship. Since these harms are described as occurring (the unauthorized use and potential for misuse), this qualifies as an AI Incident rather than a hazard or complementary information.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
El Economista
Why's our monitor labelling this an incident or hazard?
The event describes the illicit use of AI systems' outputs to replicate capabilities without authorization, which is a misuse of AI development and use. While no direct harm has been reported, the potential for these illicitly obtained AI models to lack safety controls and be used for harmful purposes (e.g., biological weapons, cyberattacks) presents a credible risk. This aligns with the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving harm to security and public safety. The event does not describe an actual realized harm yet, so it is not an AI Incident. It is more than complementary information because it reports a significant risk and illicit activity involving AI systems.

You've Stolen Data Too: Elon Musk & The Internet Call Out Anthropic's Hypocrisy After Chinese Data Theft Announcement

2026-02-24
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their training data. The accusations and lawsuits concern unauthorized use of copyrighted material for training AI, which constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The involvement of multiple parties and the ongoing legal actions confirm that harm has occurred, not just potential harm. The event also highlights ethical and legal disputes around AI data sourcing, reinforcing the classification as an AI Incident rather than a hazard or complementary information. The presence of direct harm (copyright infringement) and the AI system's role in causing this harm justify this classification.

Anthropic accuses DeepSeek of using Claude to improve its AI models

2026-02-23
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and the use of AI distillation techniques. The unauthorized large-scale use of Claude to train other AI models constitutes a violation of intellectual property rights and terms of service, which falls under harm category (c) violations of rights. The accusation of threat to national security further supports the significance of harm. Since the event describes realized misuse and harm, it qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models

2026-02-23
engadget
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Claude and competing AI models) and their development and use. The alleged 'distillation attacks' represent misuse of AI outputs to train other models, which could lead to intellectual property rights violations and circumvention of safeguards, posing a plausible risk of harm. However, the article does not report actual realized harm such as injury, disruption, or confirmed rights violations caused by these actions. The mention of a lawsuit against Anthropic for training data use is a legal proceeding related to AI development. The main narrative centers on accusations, potential misuse, and responses, fitting the definition of Complementary Information as it provides updates and context on AI ecosystem developments and governance responses rather than reporting a new AI Incident or AI Hazard.

Anthropic and the Pentagon reportedly at odds; military contract may change

2026-02-23
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used by the military and discusses a dispute over its permitted uses, including refusal to allow use in lethal autonomous weapons or domestic surveillance. This indicates the AI system's development and use are central to the event. Although no direct harm or incident has occurred, the potential for harm exists if the AI were used in ways Anthropic opposes. The Department of Defense's consideration to cancel the contract and labeling Anthropic as a supply chain risk further underscores the plausible risk of harm or operational disruption. Since the article does not report any realized harm but focuses on potential future risks and contractual disputes, the event fits the definition of an AI Hazard.

Anthropic accuses Chinese firms of using fake accounts; stolen model data feared diverted to military use

2026-02-24
星洲日报
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the Chinese companies' AI models) and describes the misuse of AI through fake accounts to extract proprietary model data. This misuse directly leads to violations of intellectual property rights and raises national security concerns, which fall under harm categories (c) violations of rights and (d) harm to communities or states. The involvement of AI in the development and use phases is clear, and the harm is realized through unauthorized data extraction and potential military application. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic says DeepSeek gained an advantage by distilling the Claude model

2026-02-24
早报
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude model and other AI models) and concerns the use and misuse of AI outputs for training other AI systems. The alleged 'distillation' practice violates service terms and involves unauthorized use of AI outputs, which can be considered a breach of intellectual property rights. However, the article does not describe any direct or indirect harm that has materialized yet, such as legal rulings, damages, or other harms. The focus is on the potential for misuse and the concerns raised by Anthropic and US officials. Therefore, this event fits best as an AI Hazard, as it plausibly could lead to an AI Incident involving intellectual property violations or other harms if the practice continues or escalates.

DeepSeek and other Chinese firms accused of illegally extracting US AI models' capabilities

2026-02-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and the Chinese companies' AI models) and describes the use and misuse of these AI systems to illegally extract model capabilities, which is a direct violation of intellectual property rights and service terms. The misuse has already occurred, with millions of interactions and thousands of fake accounts involved, indicating realized harm. Furthermore, the article highlights the potential for significant national security risks due to the lack of safety safeguards in the illegally distilled models, which could lead to cyberattacks and misinformation campaigns. These harms fall under violations of intellectual property rights and harm to communities and national security, meeting the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek and other Chinese firms accused of illegally extracting US AI models' capabilities

2026-02-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and ChatGPT) and describes their unauthorized use by Chinese companies to extract AI model capabilities illegally. This constitutes a violation of intellectual property rights and service terms, which is a breach of legal obligations protecting intellectual property. Furthermore, the article details the potential for these illegally obtained AI capabilities to be used in harmful ways, including cyberattacks and misinformation, posing national security risks. These harms are either occurring or highly plausible given the described activities. Hence, this event meets the criteria for an AI Incident due to direct and indirect harm caused by the AI systems' misuse and the resulting legal and security violations.

Anthropic AI claims that it has identified "industrial-scale distillation attacks" by Chinese AI company DeepSeek

2026-02-24
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models and their distillation). The alleged activity is the illicit extraction of AI model capabilities through distillation attacks, which is a misuse of AI development and use. While no direct harm has been reported, the potential for harm is credible and significant, including intellectual property violations and risks of AI capabilities being used in military or surveillance contexts without safeguards. The event does not describe realized harm but highlights a credible risk that could lead to AI Incidents. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic accuses DeepSeek of plundering its AI models to build...

2026-02-23
Le Devoir
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically the unauthorized use of Anthropic's AI system Claude by DeepSeek and others to train their own models. While this "distillation" method is not inherently illegal, the scale and intent described suggest a misuse that could plausibly lead to harms such as enabling malicious uses or censorship circumvention. No direct harm has been reported yet, so this situation represents a credible risk of future harm rather than an incident with realized harm. The article also discusses governance and ethical responses, but these are secondary to the main narrative about the alleged misuse. Therefore, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic accuses three Chinese companies of illicitly distilling its AI models: DeepSeek, MiniMax, and Moonshot AI

2026-02-24
ET Net
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) and alleges violations of intellectual property rights through unauthorized extraction of AI model outputs for training competing models. This constitutes a breach of obligations under applicable law protecting intellectual property rights, which fits the definition of an AI Incident. The harm is indirect but material, as it involves unauthorized use of AI outputs to develop competing AI systems, potentially undermining the original developer's rights and market position.

Anthropic accuses Chinese AI firms of misusing Claude, Elon Musk fires back

2026-02-24
Digit
Why's our monitor labelling this an incident or hazard?
Anthropic's claim that three Chinese AI firms created over 24,000 fake accounts and generated over 16 million interactions with Claude to extract its capabilities for training their own AI models indicates a clear violation of intellectual property rights and misuse of the AI system. This misuse directly harms Anthropic by stealing proprietary data and undermining their competitive advantage. The involvement of AI systems is explicit, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, specifically as a violation of intellectual property rights (c). Elon Musk's comments, while relevant, do not change the classification of the primary event.

Anthropic accuses China's AI labs of ripping off content - just like it did

2026-02-24
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and their development/use through model distillation. The allegations relate to unauthorized use of data and potential copyright infringement, which falls under violations of intellectual property rights. However, the article does not describe a concrete AI Incident where harm has already occurred or a hazard where harm is plausible but not realized. Instead, it reports accusations and ongoing lawsuits, which are governance and legal responses to AI-related issues. Thus, it fits the definition of Complementary Information rather than an Incident or Hazard.

Unwilling to use AI for surveillance of citizens or militarization, Anthropic may lose US Department of Defense contract

2026-02-24
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI models) and their potential military use, which could plausibly lead to harms such as violations of human rights or harm to communities if used for mass surveillance or autonomous weapons. Although no harm has yet occurred, the dispute highlights credible risks associated with AI deployment in defense. The event does not describe an actual incident or realized harm, nor is it merely complementary information or unrelated news. Hence, it fits the definition of an AI Hazard.

Anthropic misanthropic toward China's AI labs

2026-02-24
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (large language models and model distillation techniques) and discusses their unauthorized use and potential misuse. The harms described (national security risks, cyberattacks, disinformation) are plausible future harms that could arise if illicitly distilled models proliferate without safeguards. No actual harm or incident is reported as having occurred yet; the concerns are about potential misuse and risks. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to AI Incidents involving harm to communities, security, and rights, but no direct or indirect harm has been documented in the article at this time.

China under scrutiny: Anthropic accuses three firms of stealing secrets from its Claude AI

2026-02-23
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude chatbot and the accused firms' AI models) and describes a direct misuse of AI outputs to steal proprietary AI capabilities, which is a violation of intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the circumvention of safety controls poses security risks, further supporting the classification as an AI Incident. The harm is realized (the theft has occurred), not merely potential, so it is not an AI Hazard. The event is not merely complementary information or unrelated news, as it details a specific harmful event involving AI misuse.

Anthropic has had enough! It accuses three Chinese AI startups, including DeepSeek, of "copying": 16 million prompts fired at Claude to learn from it covertly

2026-02-24
數位時代
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' AI models) and describes the misuse of AI outputs for unauthorized training, which is a breach of intellectual property rights, a recognized harm under the framework. The large-scale, organized scraping and use of AI-generated content to replicate capabilities without consent directly leads to harm. Additionally, the potential for these capabilities to be used in military and surveillance contexts adds to the severity of the incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
The Manila times
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems leading to violations of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, thus constituting harm under category (c). The unauthorized extraction and replication of AI capabilities through distillation is a direct misuse of AI systems causing harm. Additionally, the event highlights national security risks due to the lack of safety guardrails in the illicitly obtained models, implying potential future harms. Since the harm (intellectual property theft) has already occurred and is significant, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI systems is explicit, and the harm is directly linked to their misuse.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots and AI models) and describes the misuse of AI outputs to illicitly replicate capabilities, constituting a violation of intellectual property rights and potential security risks. The misuse has already occurred, involving millions of interactions and fake accounts, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the misuse of AI systems leading to violations of intellectual property rights and potential broader harms to security.

Anthropic accuses DeepSeek, other Chinese AI firms of model theft

2026-02-24
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, focusing on the development and use of AI models. The alleged theft and misuse of AI models by foreign entities for authoritarian purposes could plausibly lead to significant harms such as human rights violations, disinformation campaigns, and cyberattacks. Since no specific harm has yet been reported but the potential for serious future harm is clearly articulated, this event qualifies as an AI Hazard rather than an AI Incident. The concerns about misuse in military and surveillance contexts further support the classification as a plausible future risk.

Anthropic slams Chinese AI firms for harvesting data from its Claude chatbot

2026-02-23
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and other AI chatbots) and the unauthorized use of data generated by one AI system to train others, which is a misuse of AI technology. The harm includes violations of intellectual property rights (copyright infringement) and potential national security risks, which fall under the defined harms (c) violations of rights and (e) other significant harms. The involvement of AI is direct and central to the incident, and the harm is realized through illegal data harvesting and ongoing lawsuits. Thus, this is classified as an AI Incident.

US AI giants accuse Chinese rivals of mass data theft

2026-02-23
O Globo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots and AI models) and their development and use. The alleged unauthorized data extraction and model distillation could plausibly lead to harms such as violation of intellectual property rights and risks to national security through unsafe AI capabilities. However, the article does not report actual realized harm or incidents but rather warns of potential risks and calls for coordinated responses. Thus, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but has not yet directly or indirectly caused harm.

Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

2026-02-23
VentureBeat
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) and details how its misuse through fraudulent accounts has directly led to intellectual property theft and potential national security harms. The misuse is deliberate and large-scale, involving sophisticated techniques to evade detection and extract capabilities, which fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and significant harms. The national security framing and the scale of the operation confirm the severity and realized nature of the harm, distinguishing it from a mere hazard or complementary information. Thus, the classification as an AI Incident is justified.

Critics Mock Anthropic's Claims Chinese AI Labs Are Stealing Its Data

2026-02-23
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the alleged unauthorized extraction of AI model outputs to train competing models, which constitutes a form of intellectual property violation and potential breach of legal protections. This misuse has already occurred at scale (over 16 million exchanges), indicating realized harm in terms of violation of intellectual property rights and potential weakening of export controls. Anthropic's concerns about the use of distilled models in military and surveillance contexts further highlight potential broader harms. Therefore, this event qualifies as an AI Incident due to the realized violation of rights and the direct involvement of AI systems in causing harm.

Anthropic Rallies Industry to Combat AI Model Theft | PYMNTS.com

2026-02-23
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems (model distillation attacks) that has already occurred, leading to unauthorized replication of AI capabilities. This misuse could directly or indirectly lead to further harms, including the deployment of unsafe AI models and the circumvention of export controls, which are significant harms under the framework. Although no downstream harm is reported as realized yet, the unauthorized replication itself has occurred at scale, and the potential consequences described indicate a credible risk of further harm. The event therefore qualifies as an AI Incident on the basis of the realized misuse and the associated significant risks.

Anthropic Alleges DeepSeek, MiniMax Trained Models With Claude

2026-02-23
Tech.co
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Anthropic's Claude and rival AI models) and discusses the use of AI outputs for training other models, which relates to intellectual property rights. However, it does not describe any direct or indirect harm occurring yet, such as legal rulings, damages, or operational disruptions. The focus is on allegations and the competitive dynamics in AI development, which is informative but does not meet the threshold for an AI Incident or AI Hazard. Thus, it fits the definition of Complementary Information, providing insight into AI development practices and disputes without reporting a specific harm or credible future harm event.

Anthropic accuses China of siphoning Claude data via 24,000 fake accounts - Cryptopolitan

2026-02-23
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and other AI models) and describes a large-scale unauthorized use of the AI system outputs by foreign entities to train competing models. This constitutes misuse of AI system outputs and raises concerns about potential future harms, especially national security risks if these capabilities are integrated into military or surveillance systems. Since no direct harm or incident has been reported yet, but the potential for harm is credible and clearly articulated, the event fits the definition of an AI Hazard rather than an AI Incident. The article also includes market reactions and new product announcements, but these are secondary to the main issue of unauthorized data extraction and its implications.

Anthropic Claims Chinese AI Firms Illegally Copied Claude in Massive 'Distillation Attacks'

2026-02-24
Tech Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the alleged copied models) and details unauthorized use and replication of AI capabilities through distillation attacks. This unauthorized use constitutes a breach of intellectual property rights, which is a violation of applicable law protecting intellectual property rights, fitting the definition of harm (c) under AI Incident. The harm is realized, not just potential, as the unauthorized processing of millions of exchanges has already occurred. Although the event notes that these actions are not criminal, the breach of contractual and legal obligations related to AI use is clear. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese firms of stealing data from Claude

2026-02-23
News.az
Why's our monitor labelling this an incident or hazard?
The event involves the unauthorized use of an AI system (Claude) through distillation attacks, which is a misuse of the AI system's outputs. The misuse is linked to potential future harms including military and surveillance applications, disinformation, and cyber operations, which could significantly impact human rights, security, and communities. Although no direct harm has been reported yet, the credible risk of such harms is clearly articulated. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI misuse with potential for harm.

Anthropic alleges industrial-scale Claude attacks by DeepSeek and other Chinese AI rivals

2026-02-23
Crypto Briefing
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) by other AI labs to extract its capabilities illicitly, which constitutes misuse of AI technology. The unauthorized distillation could lead to the creation of AI models lacking safety measures, posing risks of harm such as misuse in cyber operations or biological threats, which are significant harms. Although no direct harm is reported as having occurred yet, the large-scale illicit activity and the potential for unsafe AI models indicate a plausible risk of harm. Therefore, this event qualifies as an AI Hazard due to the credible potential for significant harm stemming from the misuse of AI systems.

Anthropic accuses multiple Chinese companies of "distilling" its Claude model

2026-02-23
The Wall Street Journal - China
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) and the alleged misuse of its outputs by other companies through fraudulent accounts to train their own AI models. This constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting such rights. Since the misuse has already occurred and is described as ongoing, it qualifies as an AI Incident due to the realized harm of rights violation through unauthorized data extraction and model distillation.

Anthropic accuses Chinese labs of trying to illicitly take Claude's capabilities

2026-02-23
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude and the Chinese labs' AI models) and describes the illicit use of AI development techniques (distillation) to replicate capabilities without authorization. The potential harms include offensive cyber operations, disinformation, and mass surveillance, which are serious harms to communities and rights. However, the article does not document that these harms have yet occurred, only that the illicit activity is ongoing and could plausibly lead to such harms. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the illicit activity and its risks, not on responses or broader ecosystem context. It is not unrelated because AI systems and their misuse are central to the event.

Anthropic accuses DeepSeek, MiniMax and others of "distilling" its work

2026-02-24
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude AI model and competing AI models) and describes the misuse of AI outputs through distillation, which is a form of unauthorized use of AI-generated data. This misuse directly violates intellectual property rights and service agreements, constituting a breach of obligations under applicable law protecting intellectual property rights. Furthermore, the potential use of distilled AI capabilities in military and surveillance contexts implies plausible future harm. Since the event reports ongoing misuse and associated harms, it qualifies as an AI Incident due to realized violations and risks. The article also mentions responses and calls for coordinated industry and policy action, but the primary focus is on the incident of misuse itself.

Anthropic Slams China for AI Theft, But Critics Say the Outrage Is Hypocritical

2026-02-23
PCMag UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and other AI models) and describes unauthorized use and extraction of AI capabilities through fraudulent means, which directly leads to violations of intellectual property rights. This fits the definition of an AI Incident as it involves harm through breach of intellectual property rights caused by the misuse of AI systems. The presence of AI systems is clear, the misuse is described, and the harm is realized. The criticisms of Anthropic's own practices are background context and do not change the classification of the main event. Hence, the event is best classified as an AI Incident.

Anthropic accuses DeepSeek and other Chinese AI models of copying; Musk fires back: a thief crying "stop thief" while stealing data at massive scale

2026-02-24
驱动之家
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and describes the misuse of AI techniques (distillation attacks) to illegally extract and replicate proprietary AI model capabilities. This misuse has directly led to violations of intellectual property rights and breaches of service terms, which are harms under the AI Incident definition. The involvement of multiple companies and the scale of the alleged attacks further support the classification as an AI Incident. Although the event also includes responses and counterclaims, the primary focus is on the realized harm caused by the unauthorized use of AI systems and data.

Accusing others of distillation? Musk slams Anthropic over large-scale misappropriation of training data

2026-02-24
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article describes a dispute involving the alleged large-scale theft of training data by Anthropic, which is directly related to the development of AI systems. Unauthorized use of training data constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The involvement of AI systems is explicit, and the harm is realized (not just potential). Hence, this event is classified as an AI Incident.

Anthropic accuses DeepSeek, Moonshot and MiniMax of distilling Claude through 16 million queries

2026-02-23
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and the distilled models) and describes misuse of AI capabilities through illicit distillation. The misuse is ongoing and large-scale, with potential to cause significant indirect harms such as weakening safeguards, enabling malicious uses by state or non-state actors, and complicating export controls. However, the article does not report any actual realized harm such as injury, rights violations, or disruption caused by these distilled models yet. The focus is on the plausible future harms and risks arising from this illicit activity. Anthropic's response and call for coordinated action further support the classification as an AI Hazard rather than an Incident or Complementary Information. It is not unrelated, as the event centers on AI misuse and its risks.

Anthropic: Chinese AI firms created 24,000 fraudulent accounts for 'distillation attacks'

2026-02-23
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (distillation attacks on large language models) to illicitly extract proprietary AI capabilities, which directly leads to a violation of intellectual property rights, a recognized form of harm under the AI Incident definition. The creation of fraudulent accounts and the massive interaction volume indicate deliberate misuse of AI systems. The harm has materialized as the theft of AI technology and the undermining of the original company's rights and competitive position. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese AI labs of large-scale theft of Claude model technology

2026-02-24
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused derivative models) and details the unauthorized use of AI outputs to replicate proprietary AI capabilities, constituting a violation of intellectual property rights. The large-scale nature of the extraction and the potential misuse of the stolen models for harmful activities (e.g., cyberattacks, misinformation, surveillance) indicate direct and indirect harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI systems in the development and use phases, the realized harm of IP theft, and the potential for further harm justify this classification.

Anthropic accuses several Chinese AI companies of illicitly "distilling" its models

2026-02-24
香港經濟日報 hket.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the unauthorized extraction of AI model outputs to train other models, in violation of intellectual property rights. This constitutes a breach of obligations under applicable law intended to protect intellectual property rights, and the harm is realized, since the unauthorized use has already occurred rather than remaining a potential risk. The event is therefore classified as an AI Incident.

Anthropic accuses Chinese labs of 'industrial-scale' model stealing

2026-02-24
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and describes unauthorized use and extraction of AI capabilities through fraudulent means. Although no direct harm has been reported, the potential for harm is credible and significant, including intellectual property violations and risks related to military and surveillance misuse. The event does not describe realized harm but highlights a serious threat that could plausibly lead to AI incidents. Therefore, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. The lack of official response or governing laws further underscores the potential risk and need for coordinated action, reinforcing the hazard classification.

Anthropic accuses Chinese AI companies of mining Claude's capabilities, calls for chip export controls

2026-02-24
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically the unauthorized distillation of AI model capabilities, which is linked to potential national security harms. Although no direct harm has been reported yet, the described scenario plausibly leads to significant harms including threats to safety and security due to the proliferation of AI models without adequate safeguards. Therefore, this constitutes an AI Hazard as it highlights credible risks stemming from AI misuse and proliferation that could lead to serious incidents in the future.

Anthropic Sounds the Alarm: Chinese AI Labs Are Harvesting Claude's Intelligence as Washington Wrestles With Chip Export Controls

2026-02-23
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and derivative models) and discusses the use and misuse of AI outputs for training competing models, which is a form of intellectual property violation. However, the article does not report any realized harm such as legal rulings, direct injury, or confirmed breaches of rights but rather alleges ongoing unauthorized extraction and the potential erosion of U.S. AI leadership. The harm is indirect and potential, related to economic and strategic impacts rather than immediate physical or legal harm. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to significant harm if the practice continues unchecked, but no concrete incident of harm has been confirmed or detailed.

Anthropic (Claude) accuses DeepSeek and other Chinese AIs of using its data

2026-02-23
KultureGeek
Why's our monitor labelling this an incident or hazard?
The article describes the unauthorized use of an AI system's outputs to train competing models, which is a form of AI system use that could plausibly lead to significant harms, including national security threats and misuse in military or surveillance systems. Although the distillation practice is legal and common, the scale and intent here raise credible concerns about future harms. Since no direct harm or incident is reported, and the main focus is on the potential threat and unauthorized use, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

DeepSeek, Moonshot AI and MiniMax accused of large-scale distillation of Claude

2026-02-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and competing AI models) and details the use and misuse of AI outputs through large-scale fraudulent interactions to extract proprietary capabilities. This unauthorized extraction directly harms the rights of the model owner (Anthropic) by violating intellectual property and possibly other legal protections. The harm is realized, not just potential, as the attacks have been conducted at scale and attributed with high confidence. Hence, it meets the criteria for an AI Incident due to violations of intellectual property rights and the direct involvement of AI systems in causing harm.

Report: Musk's xAI signs contract with the US Department of Defense; Grok model approved for use in classified military systems

2026-02-24
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) in highly sensitive military contexts, which could plausibly lead to significant harms such as disruption of critical infrastructure, violations of rights, or harm to communities if misused or malfunctioning. Although no direct harm has been reported yet, the article highlights the potential for future risks inherent in deploying AI in classified military systems. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system's involvement and potential for harm are central to the report.

Three major Chinese AIs called out! Musk mocks Anthropic as "a thief crying stop thief"

2026-02-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their use/misuse. The alleged 'distillation attacks' involve extracting AI capabilities through extensive querying, which is a misuse of AI outputs to train competing models, implicating intellectual property rights violations. The article also references past legal consequences for Anthropic's own data practices, reinforcing the context of rights violations. The harms described include violations of intellectual property rights and legal obligations, fitting the definition of an AI Incident. Although some claims are allegations and the article notes they are 'one-sided,' the described activities have directly or indirectly led to legal disputes and rights violations, meeting the criteria for an AI Incident rather than a mere hazard or complementary information.

Anthropic claims Chinese labs exploited AI model amid US export discussions

2026-02-24
NextBigWhat
Why's our monitor labelling this an incident or hazard?
The exploitation of the AI model through fake accounts constitutes improper use of an AI system, but the article does not report any actual harm such as injury, rights violations, or disruption caused by this exploitation. The focus is on the potential threat and geopolitical tensions, which aligns with a plausible risk scenario rather than a realized incident. Therefore, this event fits the definition of an AI Hazard: it could plausibly lead to harm or security issues, but no harm has been reported yet.

Anthropic accuses DeepSeek of plundering its AI models [TV5Monde Afrique] https://information.tv5monde.com/economie/anthropic-accuse-deepseek-de-piller-ses-modeles-dia-2810918

2026-02-23
Africain.info
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically allegations that DeepSeek misappropriated AI models from Anthropic. This constitutes a violation of intellectual property rights related to AI system development and use, which falls under harm category (c). Since the accusation implies that the AI system's development and use have led to a breach of legal and intellectual property rights, this qualifies as an AI Incident.

DeepSeek and two other Chinese firms accused of illegally stealing US AI model capabilities

2026-02-23
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot) and the misuse of their capabilities by other AI companies through unauthorized 'distillation attacks.' This misuse has directly led to violations of intellectual property rights and poses significant national security risks, which are harms covered under the AI Incident definition (violations of intellectual property rights and harm to communities/national security). The involvement of AI is clear, the harm is realized (illegal extraction and use), and the consequences are significant. Thus, the classification as an AI Incident is appropriate.

[AINews] Anthropic accuses DeepSeek, Moonshot, and MiniMax of >16 million "industrial-scale distillation attacks"

2026-02-24
latent.space
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, focusing on alleged unauthorized extraction of AI model knowledge ('distillation attacks'), which implicates AI system development and use. The alleged actions, if true, would constitute violations of intellectual property rights, a recognized harm under the framework. However, since the article only reports accusations without evidence of actual harm or confirmed incidents, it fits the definition of an AI Hazard—an event where AI system misuse could plausibly lead to harm but has not yet done so. There is no indication of remediation or governance response that would classify this as Complementary Information, nor is it unrelated to AI. Hence, the classification as AI Hazard is appropriate.

Anthropic accused three Chinese competitors of developing their own AI models.

2026-02-24
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Claude and competing AI models) and describes unauthorized use of AI model data through distillation, which is a misuse of AI development resources and a violation of intellectual property rights. However, the article does not describe any actual harm or incident resulting from this misuse, only the potential for harm and the need for industry-wide mitigation. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident (e.g., intellectual property violations, unfair competition, or other harms) but no direct harm has yet been reported.

Anthropic Accuses Chinese AI Labs DeepSeek, Moonshot, and MiniMax of Stealing Claude Capabilities

2026-02-23
Trending Topics
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and describes their misuse through unauthorized querying and distillation attacks, which directly lead to harms such as intellectual property theft and the creation of AI models lacking safeguards. These unprotected models pose significant risks of harm to human rights, security, and potentially physical harm through malicious uses. The misuse is ongoing and has materialized harms, not just potential risks. Anthropic's warnings about the use of these distilled models for offensive cyber operations and disinformation campaigns further confirm the presence of realized or imminent harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Deepseek, Moonshot, and MiniMax of stealing Claude's AI data through 16 million queries

2026-02-23
The Decoder
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the attacking labs' models) and their use/misuse. The large-scale distillation attack through millions of queries is a misuse of AI systems to steal proprietary AI outputs, which is a violation of intellectual property rights under the OECD framework. The harm is realized as the theft of AI-generated data, not merely a potential risk. Hence, it meets the criteria for an AI Incident due to breach of intellectual property rights caused by AI system misuse.

Elon Musk accuses Anthropic of stealing AI training data at "large scale" after the Amazon-backed company accused Chinese rivals of copying

2026-02-24
Benzinga France
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Anthropic's Claude model, AI training data, model distillation). The accusations relate to the development and use of AI systems, specifically unauthorized data use and large-scale model replication. However, no actual harm (injury, rights violation, infrastructure disruption, or environmental harm) is reported as having occurred. The concerns about security risks and potential misuse of distilled models indicate plausible future harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but has not yet done so. It is not Complementary Information because the main focus is on the dispute and risks, not on responses or ecosystem updates. It is not Unrelated because AI systems and their misuse are central to the event.

Anthropic accuses DeepSeek, Moonshot AI and MiniMax of copying Claude

2026-02-23
Trending Topics
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and competing AI models) and describes the misuse of AI outputs to train other models illicitly. The unauthorized distillation technique is a misuse of AI development and use, violating terms of service and export controls. While no direct harm is reported as having occurred, the article details credible risks that these illegally distilled models could be used for harmful purposes such as bioweapons, cyberattacks, and mass surveillance, which align with harms to health, human rights, and security. This fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident. The article's focus is on warning about and exposing these risks rather than reporting an actual incident. Therefore, the classification is AI Hazard.

Anthropic accuses Chinese firms of AI espionage

2026-02-23
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Chatbot Claude and competing AI models) and describes the use and misuse of these systems leading to harm. The harm includes violation of intellectual property rights (unauthorized extraction of data from Claude) and ethical violations (manipulation of politically sensitive content to comply with censorship). Since these harms have occurred due to the development and use of AI systems, this qualifies as an AI Incident under the definitions provided.

Anthropic accuses Chinese firms of AI espionage

2026-02-23
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the spying on an AI chatbot to improve other AI models, which is a misuse of AI technology. However, there is no direct or indirect evidence of actual harm occurring yet, such as violations of rights, health, or property damage. The event highlights potential risks and ethical concerns but does not describe a realized AI Incident. Therefore, it is best classified as Complementary Information, as it provides important context about AI misuse and competitive challenges without reporting a concrete AI Incident or AI Hazard.

Anthropic accuses several Chinese companies of misusing Claude to train AI models

2026-02-23
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of unauthorized use of an AI system (Claude) for model distillation by other companies, which could plausibly lead to significant harms such as misuse in military, intelligence, misinformation, and surveillance contexts. Although the misuse is described as occurring, the harms are framed as potential and systemic risks rather than documented incidents causing direct harm at this time. Therefore, this event fits the definition of an AI Hazard, as it involves the development and use of AI systems in ways that could plausibly lead to AI Incidents, but no specific harm has yet been reported or confirmed.

Anthropic CEO to meet US Secretary of Defense to discuss the use of AI models at the Department of Defense

2026-02-23
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI models) used by the Department of Defense, indicating AI system involvement. However, it does not describe any realized harm or incident caused by these AI systems, nor does it report a credible imminent risk of harm. Instead, it focuses on ongoing dialogue and negotiation about ethical use and deployment constraints, which fits the definition of Complementary Information as it provides context and updates on governance and societal responses to AI use in defense. Therefore, the event is best classified as Complementary Information rather than an AI Incident or AI Hazard.

Chinese AI Firms Accused of Stealing Anthropic's Claude Chatbot Capabilities

2026-02-24
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (Claude chatbot) and the illicit extraction of its capabilities by other AI firms, which is a direct violation of intellectual property rights, a recognized harm under the AI Incident definition. The large-scale and coordinated nature of the data extraction, the use of fraudulent accounts, and the circumvention of export controls indicate misuse of AI systems. Furthermore, the removal of safety guardrails in the distilled models introduces risks of misuse with national security implications, which is a significant harm. These factors collectively meet the criteria for an AI Incident rather than a hazard or complementary information.

Distillation Attacks on Claude Are Real. So Is the Lobbying Campaign.

2026-02-23
Implicator.ai
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Claude and derivative models) to extract proprietary model capabilities through fraudulent means, which constitutes a violation of intellectual property rights and poses national security risks. The large scale of the distillation campaigns and the use of sophisticated proxy networks indicate a direct role of AI systems in causing harm. The article documents realized harm (unauthorized extraction and use of AI capabilities) rather than just potential harm. Therefore, it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information. The policy and governance context supports the significance of the harm caused.

Detecting and preventing distillation attacks

2026-02-23
anthropic.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the illicitly distilled models) and describes the use and misuse of these AI systems to extract capabilities illicitly. The harms include violation of intellectual property rights (unauthorized use of Claude's capabilities), and significant risks to national security and human rights due to the proliferation of unprotected AI models that can be used for malicious purposes such as cyberattacks and surveillance. These harms are materialized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs

2026-02-23
blockchain.news
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude model and the Chinese labs' AI systems) and describes malicious use of AI capabilities to steal proprietary AI model functions through fraudulent API queries. This misuse directly leads to violation of intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the stolen models lacking safeguards could facilitate further harms such as cyberattacks and disinformation campaigns, reinforcing the incident classification. The detailed description of the attacks, their scale, and the resulting harm confirms this is not merely a potential risk but an actual incident involving AI misuse and harm.

Anthropic Claims DeepSeek, Chinese Firms Use Claude for AI Training

2026-02-24
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) and its unauthorized use by other companies to create distilled AI models without safety measures. The misuse involves large-scale fraudulent interactions and the creation of AI models that could be used for harmful purposes such as surveillance and disinformation, which are violations of human rights and could harm communities. However, the article does not report that these harms have already materialized; rather, it highlights the plausible future risks and calls for preventive actions. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their misuse with potential for harm.

What did Anthropic accuse Chinese AI labs of?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Anthropic's Claude model) and the misuse of its outputs by other AI labs to train competing models. The misuse involves automated, large-scale prompting via fake accounts, which is a form of abuse of the AI system's intended use. While the article does not describe direct harm such as injury or rights violations, the misappropriation of model outputs and violation of terms of service represent a breach of intellectual property rights and contractual obligations. Therefore, this event constitutes an AI Incident due to violation of intellectual property rights and terms of service abuse resulting from the AI system's misuse.

Why did Anthropic accuse Chinese AI labs?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event describes the use and potential misuse of AI systems (the Claude chatbot and rival models) through automated querying to extract model behavior and data. The alleged activity could lead to violations of intellectual property rights and commercial harm, which would fall within the scope of the AI Incident definition if the harm were realized. However, the article states that it is still unclear whether the accused labs have produced outputs materially replicating Claude's capabilities, and no formal regulatory findings exist, so the harm is not yet confirmed. The event is therefore best classified as an AI Hazard, reflecting a credible risk of harm from AI system misuse and potential intellectual property violations. It is more than Complementary Information because it reports a specific accusation of AI system misuse with potential legal and commercial consequences.

Anthropic flags Chinese models for stealing

2026-02-24
The Deep View
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to carry out large-scale unauthorized model distillation attacks, which is a misuse of AI technology leading to intellectual property violations. The harm is realized as Anthropic's proprietary model capabilities are being stolen and replicated without authorization, which breaches legal and ethical obligations protecting intellectual property rights. The involvement of AI systems in both the attack and the defense measures confirms the AI system's role in the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Chinese AI firms accused of illicitly copying Claude model

2026-02-24
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (the Claude chatbot and derivative models) and details misuse of the AI system's outputs to train other models without authorization, breaching terms of service and regional access rules. This misuse directly leads to violations of intellectual property rights, a breach of obligations under applicable law, thus meeting the criteria for an AI Incident. Additionally, the event highlights ongoing active campaigns and potential national security risks, reinforcing the harm dimension. Although no physical harm is described, the violation of intellectual property rights and the risk of unsafe AI models spreading are significant harms under the framework.

Chinese AI Startups MiniMax, DeepSeek, Moonshot Face Distillation Accusations, Peer Zhipu Hit by GPU Crisis

2026-02-24
yicaiglobal.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models and competing AI models) and describes the use of AI systems to conduct distillation attacks, which is a form of intellectual property theft. This constitutes a violation of intellectual property rights, a recognized category of AI harm. The harm is realized as Anthropic's models' capabilities were extracted without authorization, directly impacting their commercial interests. The operational challenges faced by Zhipu AI, while significant, do not themselves constitute harm but provide context. Hence, the main event is an AI Incident due to the direct harm caused by the distillation attacks.

Anthropic Alleges Massive Distillation Campaign by Chinese AI Labs, Escalating Fight Over Chips and Safeguards

2026-02-24
Tekedia
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems: Anthropic's Claude AI system is being queried at scale by fake accounts to replicate its capabilities, which is a direct use of AI systems leading to harm. The harm includes violation of intellectual property rights through unauthorized distillation and replication of AI capabilities, which is a breach of obligations under applicable law protecting intellectual property. Additionally, the article highlights risks related to the removal of safeguards in replicated models, which could lead to misuse and security threats, further supporting the classification as an AI Incident. The scale and coordination of the campaign indicate a significant and realized harm rather than a mere potential risk, distinguishing it from an AI Hazard or Complementary Information.

Anthropic Accuses Chinese AI Labs of Mining Claude as US Debates AI Chip Exports

2026-02-23
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI and derivative models) and their unauthorized use through distillation, which is a development and use-related issue. While direct harm is not reported, the unauthorized cloning of AI models constitutes a violation of intellectual property rights, and the potential misuse of these models by authoritarian regimes for cyberattacks and disinformation campaigns represents a plausible future harm. The event also relates to ongoing debates about AI chip export controls, which are intended to mitigate such risks. Since the harms are plausible and the event centers on the risk of these harms rather than confirmed incidents, it is best classified as an AI Hazard.

Illegal AI distillation: a threat to security

2026-02-23
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems through illegal distillation, which directly threatens security and national safety by enabling the spread of unprotected AI models that could be used maliciously. Although no specific harm event is described as having already occurred, the article clearly outlines significant and credible risks of harm stemming from these practices, including potential military and cyberattack applications. Therefore, this constitutes an AI Hazard, as the illegal distillation could plausibly lead to serious AI incidents involving harm to security and communities. The article also mentions mitigation efforts but focuses primarily on the threat and risks posed by these illegal activities rather than on completed harm or responses alone.

Three Chinese AI firms reportedly used "distillation attacks" to illegally extract the Claude model's capabilities to train their own models

2026-02-24
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the competing AI models trained via distillation). The unauthorized use of Claude through fake accounts to extract its capabilities constitutes misuse of the AI system. The harm includes violation of intellectual property rights (unauthorized extraction and use of proprietary AI capabilities) and potential harm to national security due to the spread of unprotected AI capabilities that could be used maliciously. These harms have already occurred or are ongoing, as the distillation has been performed and models trained. Hence, this is an AI Incident rather than a hazard or complementary information.

Anthropic says Moonshot AI, MiniMax and DeepSeek carried out "industrial-scale distillation" of its AI models

2026-02-24
m.163.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically the Claude model and competing AI models trained via distillation. The misuse of AI through fraudulent accounts and proxy services to extract model capabilities constitutes a direct misuse of AI systems. The harm is indirect but significant, including violations of security and potential human rights through enabling malicious applications. The event meets the criteria for an AI Incident because the misuse has already occurred, causing realized harm in terms of security risks and violation of service terms. The detailed description of the harm, the scale of the unauthorized activity, and the security implications justify classification as an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese AI labs of data theft

2026-02-23
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the Chinese labs' models) and describes the unauthorized use of AI, via fake accounts and distillation, to extract and replicate AI capabilities. This constitutes a violation of intellectual property rights, a breach of obligations under applicable law, thus meeting the criteria for an AI Incident. Additionally, the article discusses potential harms to national security and safety due to the lack of protective measures in the distilled models, further supporting the classification. The involvement arises through the misuse of AI systems and their development, leading to both realized and potential harms.

Anthropic Claude Under Large-Scale Distillation Attacks by Chinese AI Labs with 13 Million Exchanges

2026-02-23
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude and the distillation technique) and describes malicious use of AI (distillation attacks) to steal capabilities. Although no direct harm has yet occurred, the unauthorized replication and potential uncontrolled dissemination of advanced AI models without safety safeguards plausibly could lead to significant harms, including misuse in sensitive domains like bioweapons or cyber operations. The event does not describe realized harm but highlights a credible risk and ongoing malicious activity. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic Says It Blocked Attempts to Distill Its AI Models by Rival Labs

2026-02-24
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's models and the rival labs' models) and details malicious use of AI outputs to replicate proprietary AI capabilities without authorization. This unauthorized distillation constitutes a violation of intellectual property rights, which is a form of harm under the AI Incident definition (c). The harm is realized as the rival labs have conducted millions of illicit queries to extract knowledge, and Anthropic's detection and blocking efforts confirm the misuse has occurred. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic Accuses Chinese AI Firms of Large-Scale Model Distillation

2026-02-24
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and derived models) and details how their outputs were illicitly used to train other AI models, constituting a breach of intellectual property rights, which is a recognized harm under the AI Incident definition. The large-scale, coordinated nature of the extraction and the potential for resulting unsafe AI systems capable of dangerous applications further supports classification as an AI Incident. Anthropic's active response to mitigate and detect such misuse confirms the realized harm and ongoing risk. Therefore, this is an AI Incident due to direct and indirect harm caused by the misuse of AI systems and violation of intellectual property rights.

Anthropic Accuses DeepSeek, Moonshot And MiniMax Of Stealing Claude's Capabilities

2026-02-24
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Claude and derived models) in a manner that breaches terms of service and intellectual property rights, which is a violation of legal and fundamental rights. The large-scale unauthorized extraction of AI capabilities and the potential for these capabilities to be used in harmful ways (e.g., military or surveillance applications without safeguards) represent significant harms. Anthropic's detection of millions of fraudulent interactions indicates that the misuse is ongoing and has already caused harm by violating rights and potentially enabling unsafe AI applications. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese AI firms of illicit Claude model exploitation

2026-02-24
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude model and derivative AI systems) and their use in a manner alleged to infringe intellectual property rights, which is a recognized category of AI harm. The large-scale querying and extraction of capabilities from Claude by the Chinese firms directly led to the alleged harm of intellectual property violation. Although the article mentions potential future risks, the primary harm of IP infringement is already occurring. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI systems and the direct link to harm through illicit use and IP violation justify this classification.

Anthropic accuses Chinese AIs of mass technology theft

2026-02-24
WinFuture.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (the Claude model and Chinese competitor models trained via distillation). The use of distillation to copy AI capabilities without authorization constitutes misuse of AI development and use. The article highlights potential harms including loss of security mechanisms and misuse for bioweapons or cyberattacks, which are serious harms under the framework. However, the article does not report actual realized harm or incidents but warns of plausible future harms and security risks. Thus, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a previously reported incident but a new report of alleged misuse. It is not Unrelated because it clearly involves AI systems and potential harms.

Anthropic Accuses China's DeepSeek Of Using Its Data To Train AI; Elon Musk Says Look Who's Talking

2026-02-24
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems and their misuse through 'distillation attacks' to extract proprietary data, which relates to intellectual property rights. However, it focuses on accusations and warnings rather than confirmed incidents of harm or legal outcomes. The mention of Elon Musk's counter-accusations and Google's report on potential risks further supports that this is an ongoing discourse about AI risks and ecosystem dynamics. Since no direct or indirect harm has been confirmed or realized yet, and the main focus is on the discussion and warnings about these practices, the event fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anthropic joins OpenAI in flagging 'industrial-scale' distillation campaigns by Chinese AI firms

2026-02-24
CNBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: Anthropic's Claude (a large AI model) and the Chinese firms' AI models trained via distillation. The misuse of Claude through fraudulent accounts and proxy services to extract knowledge for training competing models is a direct violation of intellectual property rights, which is a recognized harm under the framework. The large scale of the operation (millions of exchanges) indicates significant impact. Therefore, this qualifies as an AI Incident due to the realized harm of rights violations caused by the AI system's misuse.

Anthropic accuses three Chinese companies of distillation attacks against Claude to improve their models

2026-02-24
La Nacion
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' models) and details a misuse of AI technology (model distillation attacks) that directly infringes on intellectual property rights and undermines lawful competitive practices. This constitutes a violation of intellectual property rights, which is a recognized category of AI harm. The harm is realized, not just potential, as the attacks have already occurred with millions of interactions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Why is Anthropic accusing Chinese AI labs over distillation attacks

2026-02-25
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude model and the distillation process involving AI models). The misuse of AI systems (via massive querying and proxy networks) has directly led to the theft of proprietary AI capabilities, which is a violation of intellectual property rights, a recognized category of AI harm. The event describes actual ongoing harm, not just potential risk, and involves the use and misuse of AI systems. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

After OpenAI, startup Anthropic accuses DeepSeek and two other Chinese AIs of plundering its Claude chatbot

2026-02-24
BFMTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: Anthropic's Claude chatbot and the Chinese AI models being trained via distillation attacks. The misuse of AI (distillation attacks using fraudulent accounts to extract model capabilities) directly leads to a breach of intellectual property rights, which is a recognized harm under the AI Incident definition. The harm is realized, not just potential, as the unauthorized training and copying of AI capabilities is ongoing or has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

The art of artificial war: the case of the Chinese agents accused of copying Anthropic's models

2026-02-24
Fanpage
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models and their distillation) and discusses the unauthorized use of AI technology, which constitutes a violation of intellectual property rights. However, the harm described is potential rather than realized, focusing on plausible future risks such as AI-enabled cyberattacks or biological weapons development. Since no actual incident of harm has been reported but credible risks are highlighted, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential for harm and illicit AI use, not on responses or ecosystem updates. It is not unrelated because AI systems and their misuse are central to the narrative.

In the AI war, anything goes: Anthropic accuses DeepSeek and other Chinese companies of creating illicit copies of Claude

2026-02-24
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the alleged copied models) and their development and use. The unauthorized 'distillation' process directly relates to the use of AI outputs to train other AI models without permission, constituting a breach of intellectual property rights (harm category c). Additionally, the copied models may lack safety safeguards, posing risks of misuse, which is a significant harm. The prior lawsuit against Anthropic for training on pirated books further confirms the relevance of intellectual property violations in this context. Therefore, the event meets the criteria for an AI Incident due to realized harm through intellectual property violations and potential safety harms from the copied models.

DeepSeek, Moonshot, and MiniMax made 'industrial-scale' distillation attack to copy Claude, accuses Anthropic

2026-02-24
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and the copied models) and describes a large-scale misuse of AI outputs to replicate technology without authorization. The copied models' lack of safety mechanisms increases the risk of harmful outputs and malicious applications, which fits the definition of plausible future harm (AI Hazard). There is no direct evidence in the article that harm has already occurred, so it does not meet the threshold for an AI Incident. The event is more than just complementary information because it reports a significant security and safety threat involving AI misuse. Hence, the classification as AI Hazard is appropriate.

Open war in the AI market: Anthropic accuses Chinese companies such as DeepSeek of using Claude to train their models

2026-02-24
La Razón
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other LLMs) and their misuse in training competing AI models through unauthorized distillation. This misuse has directly led to harms including intellectual property rights violations and potential national security risks, as the extracted capabilities could be incorporated into military and surveillance systems without safeguards. The harm is indirect but significant and clearly articulated, meeting the criteria for an AI Incident. Although the article also mentions the broader context of AI training on copyrighted materials, the primary focus is on the misuse of Claude and the resulting harms, not general AI development practices. Hence, the classification is AI Incident.

AI is already a battlefield: Anthropic has just accused DeepSeek and other Chinese companies of "distilling" Claude

2026-02-24
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and derivative models) and concerns their misuse through large-scale unauthorized extraction of capabilities, which is a form of misuse of AI systems. While no direct harm is reported, the article emphasizes the potential for serious future harms, including misuse for harmful applications and violation of intellectual property rights. The large-scale, coordinated nature of the activity and the geopolitical context underline credible risks. Since harm is not yet realized but plausible, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Dispute over AI theft: Anthropic accuses Chinese firms of massive copyright infringement

2026-02-24
20 Minuten
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and competing Chinese AI models) and discusses the use and misuse of AI training data, including alleged copyright infringement. The harm is a violation of intellectual property rights, which is a recognized category of AI Incident harm. The event describes realized harm (illegal copying and training on protected works) and ongoing legal disputes, not just potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic warns of Chinese AI fraud: DeepSeek, Moonshot AI and MiniMax accused of "distilling" Claude

2026-02-24
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and the accused AI labs' models) and details the illicit use of AI development techniques (model distillation) that violate intellectual property rights and terms of service. The harm includes violation of intellectual property rights and potential risks to security and public safety due to the lack of safeguards in the illicit models. Since the harm is realized (the illicit distillation has occurred) and the risks are significant, this qualifies as an AI Incident under the framework, specifically under violations of intellectual property rights and potential harm to communities and security.

Are Chinese AI labs cheating? US AI giant Anthropic alleges some are

2026-02-24
CNN International
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like Claude) and discusses their unauthorized use (distillation) by other AI labs, which is a misuse of AI system development and use. The harms described include potential cybercrimes, bio-weapons, disinformation, and mass surveillance, which are serious and plausible future harms. Since the article does not confirm that these harms have already occurred but warns of credible risks and national security concerns, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential misuse and associated risks, not on responses or updates. It is not unrelated because the event is directly about AI systems and their misuse.

Anthropic accuses Chinese rivals of using its chatbot

2026-02-24
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude chatbot) and its outputs being exploited by others through a large-scale automated process (fake accounts) to train competing AI models. While no direct harm is reported as having occurred yet, the article emphasizes plausible future harms including misuse by authoritarian regimes for cyberattacks, disinformation, and surveillance, which align with violations of rights and harm to communities. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but has not yet caused realized harm according to the article.

Anthropic says it has identified thousands of 'fraudulent accounts' taking Claude and 'extracting its capabilities to train and improve their own models'

2026-02-24
pcgamer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Claude) and describes unauthorized extraction of its capabilities by fraudulent accounts to train other models, which is a misuse of the AI system. The harms discussed relate to intellectual property rights violations and potential misuse by foreign military and intelligence systems, which are plausible future harms. No direct or indirect harm has been reported as having already occurred, such as legal consequences or operational disruptions caused by these attacks. The focus is on the detection of the attacks and the call for coordinated action to address them, indicating a credible risk rather than a realized incident. Hence, the event is best classified as an AI Hazard.

Anthropic vs China: The grand AI heist is a hall of mirrors

2026-02-24
Economic Times
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (large language models) and discusses their development and use, including alleged misuse (distillation attacks) and legal disputes over data usage. The harms described relate to intellectual property violations and national security concerns, which fall under the AI Incident harm categories. However, the article does not report a specific AI Incident where harm has directly or indirectly occurred; rather, it reports accusations, ongoing investigations, and legal challenges. There is no clear evidence of realized harm or disruption caused by AI systems at this stage, only potential or alleged harms. The article also covers broader geopolitical and corporate dynamics, making it primarily a source of complementary information that enhances understanding of the AI ecosystem and its risks. Hence, the classification as Complementary Information is appropriate.

Deep Steal

2026-02-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, specifically the use of AI model distillation to replicate proprietary AI models. The article describes the use of these AI systems in a way that could plausibly lead to harm, such as misuse of unsafeguarded AI models for malicious operations. However, no direct or indirect harm has been reported as having occurred yet. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future if unauthorized distillation leads to harmful AI deployments. The article also discusses the need for improved defenses, but this is part of the hazard context rather than a complementary information update on a past incident.

Elon Musk to Anthropic on its claim that Chinese AI models are stealing its data: You are guilty of ... - The Times of India

2026-02-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The article involves AI systems and their development and use, specifically regarding training data sourcing and alleged intellectual property violations. However, it does not report a specific incident where harm has occurred or is imminent due to these AI systems. The harms discussed (e.g., intellectual property theft, bias) are mentioned in the context of allegations, lawsuits, and public criticism, without detailing a new or specific AI Incident or AI Hazard event. The focus is on the ongoing debate and public discourse, making this a case of Complementary Information that enhances understanding of the AI ecosystem and governance challenges rather than reporting a new harm or credible future harm event.

Anthropic claims 3 Chinese companies ripped it off, using its AI tools to train their models: 'How the turn tables' | Fortune

2026-02-24
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of Anthropic's AI system Claude by other companies to train their own AI models without authorization, involving millions of interactions through fraudulent accounts. This unauthorized use constitutes a breach of terms of service and export controls, which are legal obligations protecting intellectual property. The harm is realized as it undermines Anthropic's proprietary rights and business interests. The AI system's development and use are central to the incident, and the harm is direct and significant. Although there is broader context about industry practices and policy debates, the core event is a clear AI Incident involving intellectual property violation through AI misuse.

Anthropic says Chinese AI labs stole data from Claude to train rival models

2026-02-24
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and rival AI models) and describes illicit use of AI outputs to train other AI models, which is a misuse of AI development and use. The harms include violation of intellectual property rights and the potential for serious harms such as offensive cyber operations and disinformation campaigns, which are direct or indirect harms to communities and national security. The event reports that these harms have already occurred through the illicit data extraction and model training, and the company is responding with defense measures. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese companies of copying the Claude AI model

2026-02-24
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their development and use, specifically the unauthorized copying of AI capabilities. The potential for harm is clearly articulated by Anthropic, who warns that the copied models may lack critical safeguards, which could plausibly lead to significant harms such as malicious cyber activities or weapon development. Since no actual harm has been reported yet, but a credible risk of future harm is described, this qualifies as an AI Hazard rather than an AI Incident.

Anthropic's allegations against Chinese firms expose AI training grey area

2026-02-24
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The article focuses on allegations of unauthorized AI model training practices (knowledge distillation) that may infringe on intellectual property rights but does not describe any direct or indirect harm resulting from these actions. There is no indication of injury, disruption, or violation of rights that has materialized. The event primarily exposes a grey area in AI development practices and sparks debate, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem issues without reporting a new incident or hazard.

Anthropic accuses three Chinese companies of carrying out attacks of...

2026-02-24
europa press
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the accused companies' AI models) and describes a misuse of AI capabilities through illicit distillation attacks. This misuse has directly led to a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The large scale of fraudulent interactions and the deliberate extraction of model capabilities demonstrate a clear causal link between AI system misuse and harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Claude's owner accuses DeepSeek and other Chinese AIs of stealing data

2026-02-24
TecMundo
Why's our monitor labelling this an incident or hazard?
The event describes the use and misuse of AI systems (language models) in a way that could plausibly lead to harms such as national security threats and malicious applications. The creation of mass accounts to extract data for training competing models constitutes an irregular and potentially illegal use of AI systems. However, the article does not report any realized harm or incident but warns of plausible future risks. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic accuses Chinese tech companies of copying artificial intelligence technology

2026-02-24
Publico
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) by other AI labs to illicitly extract proprietary knowledge, which is a direct violation of intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the creation of potentially unsafe AI models without proper safety measures poses a direct risk to security, which is a significant harm. The involvement of AI systems is explicit, and the harms are both realized (intellectual property theft) and ongoing (security risks). Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

La Jornada: US AI giants accuse Chinese rivals of massive data theft

2026-02-24
La Jornada
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (chatbots and AI models) and concerns the development and use of AI technology. The alleged illicit extraction of AI capabilities constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. Since the event describes ongoing or realized unauthorized use of AI technology leading to harm (theft of IP), it qualifies as an AI Incident under the framework, specifically under harm category (c) violations of intellectual property rights.

"Hipócritas": Musk criticó denuncia de Anthropic por "robo de información" de China

2026-02-24
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots and AI models) and concerns the unauthorized use of AI-generated data to train other AI systems, which is a violation of intellectual property rights. This constitutes a breach of obligations under applicable law protecting intellectual property rights, fulfilling the criteria for an AI Incident. The dispute and accusations indicate realized harm related to intellectual property theft in AI development, not merely potential or speculative harm. Therefore, this event qualifies as an AI Incident.

Anthropic accuses China's DeepSeek of plagiarizing Claude AI to advance censorship

2026-02-24
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and their misuse through plagiarism and unauthorized replication of AI outputs to build derivative models lacking safeguards. This misuse leads to violations of intellectual property rights and raises concerns about censorship and malicious use, which constitute harm to rights and communities. Although direct physical harm is not described, the breach of intellectual property and the enabling of censorship and potential malicious use are significant harms under the framework. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's misuse and its consequences.

The fierce cyberattack a US AI company reported against a Chinese AI

2026-02-24
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and other AI models) and describes misuse (distillation attacks) that infringe on intellectual property rights, which is a recognized harm category. However, the harm is indirect and more about competitive and legal concerns rather than immediate or realized harm to persons, infrastructure, or communities. The focus is on Anthropic's detection and mitigation strategies, as well as broader governance implications, rather than an incident causing direct or indirect harm. There is no indication of injury, rights violations to individuals, or disruption caused by the misuse, only the potential undermining of competitive advantage. Hence, it fits the definition of Complementary Information rather than AI Incident or AI Hazard.

Anthropic reports "distillation attacks" by Chinese AIs to plagiarize its Claude model

2026-02-24
Cooperativa
Why's our monitor labelling this an incident or hazard?
Anthropic's report describes a deliberate and large-scale misuse of their AI system Claude through illicit distillation techniques by other AI labs. This misuse involves fraudulent access and extraction of AI capabilities, leading to models without proper safety measures. The company explicitly warns about the risk of these models being used for dangerous purposes, including biological weapons and cyberattacks, which constitutes a plausible future harm. Since the harm is not yet realized but the risk is credible and significant, this event qualifies as an AI Hazard rather than an AI Incident. The AI system's misuse and the potential for serious harm align with the definition of an AI Hazard.

Anthropic says China's DeepSeek unlawfully used its AI models

2026-02-24
ZN.UA
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Anthropic's Claude model) by other entities (DeepSeek, MiniMax, Moonshot) through unauthorized access and large-scale data extraction (industrial distillation). This constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The harm is realized, not just potential, as millions of interactions via fake accounts have already occurred. The AI system's development and use are directly implicated in the incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic Furious at DeepSeek for Copying Its AI Without Permission, Which Is Pretty Ironic When You Consider How It Built Claude in the First Place

2026-02-24
Futurism
Why's our monitor labelling this an incident or hazard?
The event describes the unauthorized extraction and copying of AI model capabilities by querying the AI system at scale, which is a misuse of AI systems leading to violations of intellectual property rights. This fits the definition of an AI Incident because the development and use of AI systems have directly led to a breach of obligations intended to protect intellectual property rights. The event is not merely a potential risk but an ongoing issue with concrete actions taken by the accused firms, making it more than a hazard. It is not complementary information because the main focus is on the alleged illicit activity and its implications, not on responses or broader ecosystem context. Therefore, the classification is AI Incident.

Anthropic accuses Chinese labs of "plundering" its AIs, and the whole internet turns against it

2026-02-24
Clubic.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude models) and describes a coordinated misuse of these AI systems' outputs to train competing models by Chinese labs. This constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition (c). The event is not merely a potential risk but describes ongoing, large-scale extraction and misuse, thus qualifying as an AI Incident rather than a hazard. The involvement of AI systems in the development and use phases, and the direct link to harm (intellectual property theft and potential national security concerns), supports this classification.

Amodei faces Hegseth and Musk over "misAnthropic": AI enters its "national security" phase

2026-02-25
Il Foglio
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear, specifically Anthropic's Claude model. The event involves the use and alleged misuse of AI systems (distillation of AI capabilities via fraudulent accounts). The article highlights potential risks to national security and competitive integrity, which could plausibly lead to harms such as intellectual property violations or strategic security breaches. However, no direct or indirect harm has yet materialized or been reported. The focus is on potential threats and strategic tensions rather than actual incidents. Hence, the classification as an AI Hazard is appropriate.

Anthropic Accuses Three Firms of Using Sophisticated Distillation Attacks

2026-02-25
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and their misuse through distillation attacks, which directly lead to intellectual property violations and potential geopolitical harms. The misuse is ongoing and has caused harm to Anthropic by unauthorized extraction of its AI capabilities. The event meets the criteria for an AI Incident because the AI system's use has directly led to a breach of intellectual property rights and presents risks of harm to communities and national security. Although some harms are potential (geopolitical risks), the intellectual property violation and large-scale unauthorized use are realized harms. Thus, the classification as AI Incident is appropriate.

Stolen? Serious accusations against Chinese AI firms such as DeepSeek and co.

2026-02-24
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems (Claude) by competitors to extract proprietary model capabilities without authorization, which directly breaches intellectual property rights, a recognized harm under the AI Incident definition. The large-scale unauthorized use and the potential security risks linked to less secure models further support classification as an AI Incident. Although the article also discusses broader governance and export control issues, the primary focus is on realized harm through unauthorized AI system use and intellectual property violations, not just potential future harm or general AI ecosystem updates.

Anthropic accuses Chinese firms of large-scale data theft; Musk: a thief crying "stop thief"

2026-02-24
星洲日报
Why's our monitor labelling this an incident or hazard?
Anthropic's accusation involves the misuse of AI systems and data theft that directly breaches intellectual property rights, which is a recognized harm under the AI Incident definition (violation of intellectual property rights). The large-scale extraction of training logic and data through fake accounts constitutes a misuse of AI systems leading to harm. Although the article also mentions potential military use, the primary harm already realized is the unauthorized data theft and model copying. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek and rivals used Claude data for AI training, Anthropic alleges

2026-02-24
Canaltech
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) where the unauthorized data extraction directly contributes to the development of competing AI models without proper safeguards. This constitutes a violation of intellectual property rights and raises significant security risks, which are harms under the AI Incident definition. The direct link between the AI system's misuse and potential harms, including security threats, qualifies this as an AI Incident rather than a mere hazard or complementary information.

Anthropic's accusations against Chinese labs over AI mining amid export debates

2026-02-24
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and competing AI models) and the use of AI techniques (distillation) to copy and improve AI models illicitly. This constitutes a violation of intellectual property rights, a recognized harm under the framework. Additionally, the article discusses potential national security risks from proliferation of AI models without safeguards, which is a significant harm. The involvement of AI systems in the development and use phases is clear, and the harms are either realized (IP theft) or plausibly imminent (security risks). Thus, this qualifies as an AI Incident due to direct and indirect harms caused by AI system misuse and development.

DeepSeek illegally acquired US AI model capabilities; experts analyze the harms

2026-02-24
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and OpenAI's ChatGPT) and their unauthorized use through AI techniques (distillation). The unauthorized extraction of model capabilities constitutes a violation of intellectual property rights and commercial secrets, which is a breach of legal protections. Moreover, the lack of safety features in the distilled models and their potential deployment in military, intelligence, and surveillance contexts pose direct risks to national security and societal harm. These harms are either occurring or highly plausible and are directly linked to the AI systems' development and use. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

DeepSeek illegally acquired US AI model capabilities; experts analyze the harms

2026-02-24
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models Claude and ChatGPT) and the unauthorized use of AI techniques (model distillation) to extract capabilities, which is a misuse of AI development and use. The harm includes violation of intellectual property rights and significant national security risks, including potential harm to communities through cyberattacks and misinformation. The article reports that these harms are occurring or have occurred, not just potential risks. Hence, this is an AI Incident rather than a hazard or complementary information.

Anthropic accuses China of theft: understanding the distillation attack - Numerama

2026-02-24
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models) and describes the misuse of AI outputs to create derivative models without authorization, constituting a violation of intellectual property rights and potentially enabling harmful applications. The harm is realized in terms of rights violations and potential indirect harm to security and surveillance contexts. Therefore, this qualifies as an AI Incident due to the direct link between AI system misuse and harm (violation of intellectual property rights and potential misuse in military/intelligence). The article also details mitigation responses but the primary focus is on the incident itself.

DeepSeek and Moonshot Accused of Stealing from Anthropic's Claude Chatbot

2026-02-24
ProPakistani
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and other AI models) and describes misuse of AI outputs through distillation attacks, which is a form of intellectual property theft. The harm is realized as violations of intellectual property rights and legal challenges, meeting the definition of an AI Incident. The involvement is through misuse of the AI system's outputs and unauthorized training data use, directly leading to harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic accuses three Chinese AI firms of "illicitly extracting" data from its model

2026-02-24
Bolsamania
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: Anthropic's Claude model and the accused firms' AI models. The misuse is the illicit extraction of model capabilities via large-scale automated querying using fraudulent accounts, which is a misuse of the AI system's outputs. This misuse directly leads to a violation of intellectual property rights, a recognized harm under the AI Incident definition. Although no physical harm or disruption is reported, the breach of obligations under applicable law protecting intellectual property rights is sufficient to classify this as an AI Incident. The event is not merely a potential risk or a complementary update but a concrete misuse causing harm.

Anthropic accuses Chinese rivals of copying its AI model

2026-02-24
DER STANDARD
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (chatbots and AI model distillation). The alleged activity is the use of AI to clone another AI's capabilities through large-scale interactions with fake accounts. While this could constitute a violation of intellectual property rights or unfair competitive practices, the article does not indicate that such harm has been legally established or has directly occurred. Therefore, this situation represents a plausible risk of harm related to AI development and use, but no concrete incident of harm is reported. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US AI company accuses Chinese companies of data theft and intellectual property infringement

2026-02-24
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude chatbot and derived models) and describes the use of AI techniques (distillation) to illicitly extract capabilities, constituting intellectual property theft, which is a violation of intellectual property rights (harm category c). Additionally, the article highlights plausible future harms such as misuse for cyberattacks or biological weapons development, which are significant harms. Since the harm (intellectual property theft) has already occurred and the AI system's misuse is central, this qualifies as an AI Incident rather than a hazard or complementary information.

DeepSeek under fire: Anthropic accuses Chinese AI firm of misusing Claude for unauthorized model training

2026-02-24
The News International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and other AI models) and their misuse by other AI firms through unauthorized data extraction and model training. The misuse constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. Although direct physical harm or injury is not reported, the unauthorized use and data siphoning represent realized harm. The mention of national security threats further underscores the seriousness of the incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses three Chinese companies of exploiting Claude to improve their AIs

2026-02-24
Boursier.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude) by other AI entities to illicitly extract capabilities for their own model development. While no direct harm has been reported, the article highlights credible security risks and the potential for these distilled models to cause harm if uncontrolled, such as bypassing safety measures and spreading freely without governance. This fits the definition of an AI Hazard, as the misuse could plausibly lead to significant harms in the future. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated since the focus is on a specific misuse scenario with potential harm.

Anthropic accuses DeepSeek and other Chinese AI firms of distilling Claude's capabilities with 24,000 fraudulent accounts

2026-02-24
iThome Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and describes unauthorized use of AI outputs to train other AI systems, which is a violation of intellectual property rights. The misuse is realized and ongoing, involving large-scale fraudulent access and extraction of AI capabilities. This constitutes a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. The event also details responses and mitigation efforts by Anthropic, but the primary focus is on the incident of unauthorized AI model distillation and misuse, not just complementary information.

Battle for AI supremacy: DeepSeek and co. apparently tap US language models

2026-02-24
Berner Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like Claude) and discusses their unauthorized use and data misuse leading to intellectual property violations and potential security harms. The misuse of AI models for distillation without permission constitutes a breach of legal and ethical obligations, fulfilling the criteria for an AI Incident under violations of intellectual property rights and potential harm to communities and national security. The involvement of AI in these harms is direct and significant, and the article reports realized harms (legal actions, unauthorized data use) and credible risks (security threats). Thus, the classification as AI Incident is justified.

China's three AI giants exposed for data theft! Anthropic alleges 24,000 fake accounts used to distill Claude model technology

2026-02-24
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude model and the accused companies' AI models) and describes the misuse of AI through systematic data theft and model distillation. The harm includes violation of intellectual property rights and the potential for serious harm if the stolen models, lacking safety guardrails, are deployed in harmful contexts such as military or surveillance. These factors meet the criteria for an AI Incident, as the AI system's misuse has directly led to intellectual property violations and poses risks of harm to communities and international order.

Anthropic accuses China's DeepSeek, Moonshot and MiniMax: 16 million requests to Claude to "distill" the model

2026-02-24
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) and describes unauthorized use and misuse of this AI system's outputs to create derivative models without permission. This constitutes a violation of intellectual property rights and potentially breaches legal export controls, which are harms under the AI Incident definition (c). The large-scale illicit activity has already occurred, indicating realized harm rather than just potential. Anthropic's mitigation efforts and sharing of indicators are responses but do not negate the incident classification. Therefore, this event qualifies as an AI Incident.

Anthropic Alleges Chinese AI Firms Copied Claude Data, Internet Pushes Back

2026-02-24
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and competing AI models) and discusses the use and alleged misuse of AI training data. The core issue is the alleged unauthorized data extraction (distillation) from an AI system, which could lead to violations of intellectual property rights and potential safety risks if models lack safeguards. However, the article does not report any confirmed harm, legal outcomes, or incidents resulting from this alleged behavior. Instead, it focuses on the accusation, public reactions, and the broader competitive and regulatory environment. This fits the definition of Complementary Information, as it provides context and updates on AI ecosystem dynamics and governance challenges without describing a concrete AI Incident or AI Hazard.

China launches a large-scale attack to steal US AI secrets

2026-02-24
ADN40
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Anthropic's Claude and the attackers' AI models) in a way that directly leads to intellectual property theft, which constitutes a violation of intellectual property rights under applicable law. The attack caused harm by stealing proprietary AI capabilities, which is a clear breach of legal protections and harms the victim company and potentially the broader AI ecosystem. Therefore, this qualifies as an AI Incident due to realized harm linked to AI system misuse and development.

Musk slams Anthropic amid double-standard controversy

2026-02-25
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Claude chatbot and competing AI models) and centers on alleged misuse of AI outputs and data to train other AI models, which constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. The dispute includes claims of large-scale unauthorized data extraction and use, which directly relates to breaches of legal and ethical obligations in AI development and use. Therefore, this qualifies as an AI Incident due to violations of intellectual property rights and associated harms arising from AI system misuse and development practices.

The illusion of low-cost Chinese AI: Anthropic accuses DeepSeek and MiniMax of cloning Claude

2026-02-24
ScenariEconomici.it
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems—specifically, the alleged unauthorized extraction of data from Anthropic's Claude AI system to train rival AI models. This constitutes a violation of intellectual property rights, which is explicitly listed as a type of harm under AI Incidents. The harm is realized (not just potential) because the data extraction and training have already occurred, and the article discusses the competitive and legal consequences. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Musk slams Anthropic as Claude-maker accuses Chinese firms of theft

2026-02-24
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article centers on allegations and public disputes about AI data theft and misuse, which are serious concerns related to AI development and intellectual property rights. However, there is no indication that these accusations have led to confirmed legal outcomes, harm, or incidents at this time. Therefore, the event represents a plausible risk and ongoing ethical debate rather than a confirmed AI Incident. It also does not primarily focus on responses, policy changes, or broader ecosystem updates, so it is not Complementary Information. Hence, it is best classified as an AI Hazard, reflecting the credible potential for harm related to AI data misuse and intellectual property violations.

Anthropic Accuses Chinese AI Firms of Mass Data Harvesting as US Confirms DeepSeek Used Restricted Nvidia Chips | LatestLY

2026-02-24
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude models and AI hardware chips) and details the misuse of these systems through large-scale data harvesting and unauthorized hardware use. The harm includes violation of intellectual property rights (unauthorized extraction of AI capabilities) and breach of export control laws, which are legal obligations. The involvement of AI systems in these harms is direct and central to the incident. Therefore, this qualifies as an AI Incident due to realized harm from AI system misuse and legal violations.

China steals US AI: Anthropic reports a massive attack - La Opinión

2026-02-24
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) through illicit means (fraudulent accounts and proxy networks) to extract its capabilities, which directly implicates AI systems. The harm includes a violation of intellectual property rights and a credible risk of harm to communities and national security through the deployment of unfiltered AI models for military and surveillance purposes. The article describes actual attacks and unauthorized use, not just potential risks, thus constituting an AI Incident rather than a hazard or complementary information. The involvement of AI is explicit, and the harms are both realized (the attack and extraction) and potentially severe (military and surveillance misuse).

Are China's 'AI tigers' cheating? US rival Anthropic alleges some are

2026-02-24
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the Chinese AI models) and their development and use. The alleged illegal distillation constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, thus meeting the criteria for harm (c). Additionally, the potential misuse of these models for cybercrimes, disinformation, and surveillance indicates possible harm to communities and national security. Since the article reports ongoing or past illicit use leading to these harms and raises concerns about realized violations and risks, this qualifies as an AI Incident rather than a mere hazard or complementary information.

[Banned News] US AI company accuses DeepSeek and other Chinese firms of "plagiarism"

2026-02-24
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (advanced language models like Claude) and their unauthorized use by other AI companies through systematic data extraction using fake accounts. This misuse directly leads to intellectual property violations and unfair competition, which are harms under the AI Incident definition. Additionally, the potential use of stolen AI capabilities in military and surveillance contexts poses a serious risk to national security, further supporting the classification as an AI Incident. The involvement is in the use (misuse) of AI systems, and the harms are both realized (IP theft) and potentially severe (national security). Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

The Generative AI Paradox: Data, Ownership and the Distillation Dispute

2026-02-24
KalingaTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude model and competing AI labs' models) and concerns the use and potential misuse of AI outputs for model distillation. However, the article does not describe any direct or indirect harm that has occurred due to this activity, such as injury, rights violations, or disruption. Instead, it focuses on allegations, strategic concerns, and broader intellectual property debates, which represent plausible risks and systemic challenges rather than concrete incidents. Therefore, this qualifies as Complementary Information, providing context and updates on AI ecosystem challenges rather than reporting an AI Incident or an immediate AI Hazard.

Anthropic accuses DeepSeek of technological plunder: inside an AI cyberwar - ZDNET

2026-02-24
ZDNet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models like Claude) and their unauthorized use by other AI actors to train competing models, which is a direct violation of intellectual property rights (a breach of obligations under applicable law). The large-scale automated querying and creation of fraudulent accounts to bypass restrictions indicate misuse of AI systems leading to harm. The article also highlights ethical and security risks from these 'low cost' models lacking safeguards, reinforcing the harm dimension. Hence, this is an AI Incident rather than a hazard or complementary information.

Anthropic accuses DeepSeek of using Claude for its AI

2026-02-24
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and derived models) and their development and use. The illicit large-scale use of Claude to train other AI models constitutes a misuse of AI development resources and a violation of intellectual property rights. The resulting models are used to enforce censorship, which implicates violations of human rights (freedom of expression) and could harm communities by enabling authoritarian control. Additionally, the lack of inherited safety protections raises risks of cyberattacks and disinformation, indicating direct or indirect harm. These factors meet the criteria for an AI Incident, as the misuse has already occurred and has caused or is causing significant harms.

Dispute over AI terms heats up: US Department of Defense threatens to terminate Anthropic's military contract

2026-02-25
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude AI) and its use in military contexts, which is central to the dispute. However, the event is about a contractual and ethical disagreement and threats of contract termination, not about an AI malfunction or misuse that has caused harm. There is mention of AI involvement in a military operation, but no direct or indirect harm caused by the AI system is described. The focus is on governance, policy conflict, and potential future risks rather than an actual AI Incident or Hazard. Thus, it fits the definition of Complementary Information, as it informs about societal and governance responses and the evolving relationship between AI companies and government defense needs.

Anthropic accuses Chinese AI labs of 'data theft' | ForkLog

2026-02-24
ForkLog
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how Chinese AI labs used Anthropic's Claude in violation of terms of use to distill capabilities into rival models, which lack safety constraints and could be used for malicious purposes including cyberattacks and surveillance. This misuse directly leads to significant harms including national security risks and potential violations of human rights. The involvement of AI systems is clear, as the incident revolves around the use and misuse of LLMs. The harms are realized or ongoing, not merely potential, as the article discusses the actual unauthorized use and its consequences. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Claude allegedly targeted by distillation attacks

2026-02-24
爱范儿
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and DeepSeek models) and discusses alleged misuse of AI technology through distillation attacks, which is a form of unauthorized replication or extraction of model knowledge. This constitutes a violation of intellectual property rights, a recognized harm under the AI Incident definition. Since the harm (violation) is alleged to have already occurred and involves direct misuse of AI systems, this qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic "distilled" humanity's largest knowledge base

2026-02-25
爱范儿
Why's our monitor labelling this an incident or hazard?
Anthropic's 'Panama Project' involved the systematic destruction and scanning of millions of books to train AI models without proper authorization, constituting a breach of copyright law and intellectual property rights. The company's use of pirated sources and large-scale unauthorized data collection directly led to legal action and a substantial settlement, evidencing actual harm to authors and publishers. The AI system's development and training process is central to this harm, fulfilling the criteria for an AI Incident under violations of intellectual property rights. The event is not merely a potential risk or complementary information but a realized incident with direct consequences.

AI behemoths in US accuse Chinese rivals of data theft - Taipei Times

2026-02-24
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and Chinese AI models) and describes the unauthorized use of AI outputs to replicate capabilities, constituting intellectual property theft, a breach of legal protections. The harm is realized as the campaigns have already taken place extensively, and the article details the scale and methods used. Additionally, the potential for misuse of these illicitly obtained models adds to the severity of the incident. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is significant.

Anthropic Claims Chinese Rivals Used 16 Million Chats to Clone Claude

2026-02-24
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and the cloned AI models) and describes the use of AI development techniques (distillation) in a way that allegedly violates intellectual property rights and raises national security concerns. The harms described are potential and plausible future harms, including the risk of unsafe AI models being used maliciously. Since no direct harm or incident has been reported yet, but the situation poses credible risks, the classification as an AI Hazard is appropriate. It is not Complementary Information because the article focuses on the accusation and risks rather than updates or responses. It is not an AI Incident because no realized harm has been documented in the article. It is not Unrelated because the event clearly involves AI systems and their misuse.

Anthropic alleges large-scale distillation campaigns targeting Claude

2026-02-24
Computerworld
Why's our monitor labelling this an incident or hazard?
The event describes unauthorized use and potential intellectual property infringement involving AI systems, which is a misuse of AI technology. However, since no actual harm or incident resulting from this misuse is reported, and the focus is on the allegation and description of the misuse rather than a realized harm, this fits the definition of an AI Hazard. The event plausibly could lead to harm, such as violation of intellectual property rights or competitive harm, but these harms are not confirmed as having occurred yet.

Anthropic accuses DeepSeek and other Chinese AIs of plundering its Claude model

2026-02-24
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their misuse through fraudulent means to train derivative models without authorization. The misuse directly leads to harms including security risks and potential deployment in harmful military and surveillance contexts. The presence of AI systems is clear, and the harms described (security threats, disinformation, surveillance) fall under harm to communities and critical infrastructure. The event is not merely a warning or potential risk but describes ongoing illicit activity with concrete impacts, qualifying it as an AI Incident rather than a hazard or complementary information.

US AI firm Anthropic alleges three Chinese AI companies stole capabilities through distillation

2026-02-24
美国之音
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (language models) and their misuse through distillation to steal capabilities, which is a form of intellectual property rights violation (harm category c). The misuse has already occurred at scale, with millions of interactions via fake accounts, directly leading to harm for Anthropic and potentially impacting the AI ecosystem and competition. The involvement of AI systems is clear, and the harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese artificial intelligence labs of stealing information from Claude

2026-02-24
InternetUA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and competing AI models) and describes the misuse of AI technology through distillation attacks that have directly led to intellectual property theft and potential national security risks. The harm includes violation of intellectual property rights and the plausible spread of unsafe AI capabilities, which are harms under the AI Incident definition. The involvement of AI is clear, and the harms are realized or ongoing, not merely potential. Hence, this is classified as an AI Incident.

Anthropic Files Formal Complaint Against Industrial-Scale Distillation Attacks - FinanceFeeds

2026-02-24
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude models and competing AI labs' models) and details the misuse of AI through distillation attacks, which is a form of AI system use leading to harm. The harms include violation of intellectual property rights (a breach of obligations under applicable law) and potential harm to communities and international stability through misuse of unprotected AI models for offensive cyber operations and disinformation. The complaint and the described security risks indicate that harm has already occurred or is ongoing, qualifying this as an AI Incident. The event also includes broader governance and security responses but the primary focus is on the realized harm from the distillation attacks.

Anthropic Accuses Chinese AI Labs of Scraping Claude Data | 2026 Tech Rivalry

2026-02-24
るなてち
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Claude and the Chinese competitors' AI models) in a way that directly leads to a violation of intellectual property rights, which is one of the defined harms (c). The unauthorized scraping and use of Claude's data for training other AI systems is a clear breach of terms and legal protections. The article also discusses the potential for increased risks such as cyberattacks and misinformation stemming from models trained on this data, reinforcing the seriousness of the incident. Since the harm (violation of intellectual property rights) is realized and directly linked to AI system misuse, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Anthropic: Claude faces 'industrial-scale' AI model distillation

2026-02-24
AI News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Claude and the distilled models) and directly leads to harm in the form of intellectual property violations and national security risks. The illicit distillation campaigns have already occurred and caused harm by enabling foreign actors to bypass export controls and safety measures, which is a breach of legal protections and fundamental rights related to intellectual property and security. The article details concrete evidence of these campaigns, their scale, and their impact, meeting the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized and significant, not merely potential or speculative.
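Many of these entries turn on "distillation" without spelling out the mechanism: a student model is trained to imitate a teacher model's outputs, so capability transfers without access to the teacher's weights. The toy sketch below, in plain PyTorch, shows the core idea with a softened KL-divergence loss on synthetic data; reported API-based attacks would instead collect text responses at scale, and nothing here reflects any party's actual pipeline.

# Toy illustration of knowledge distillation: a small "student" network is
# trained to match a larger "teacher" network's output distribution.
# All data here is synthetic.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
T = 2.0  # softening temperature

for step in range(200):
    x = torch.randn(32, 16)           # stand-in for harvested queries
    with torch.no_grad():
        t_logits = teacher(x)         # stand-in for the teacher's answers
    s_logits = student(x)
    # KL divergence between softened teacher and student distributions
    loss = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
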
Thumbnail Image

Anthropic accuses Chinese AI labs of stealing from Claude

2026-02-24
Rolling Out
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the Chinese AI models) and their misuse through unauthorized distillation, which is a form of AI system use leading to intellectual property violations and potential broader harms. The allegations describe realized unauthorized use and the associated risks, not just potential future harm. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

"AI Thanos" Anthropic partners with "victim stocks" to launch agents; software sector rebounds

2026-02-24
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Claude AI and its plugins) being developed and deployed for enterprise use. However, the article does not report any direct or indirect harm caused by these AI systems, nor does it suggest any plausible future harm. It mainly discusses new AI capabilities, collaborations, and market reactions, which fall under general AI ecosystem developments. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI adoption and integration without describing an incident or hazard.
Thumbnail Image

Anthropic reveals a vast operation to plunder its artificial intelligence model

2026-02-24
Tom’s Hardware : actualités matériels et jeux vidéo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the competing AI models) and describes a misuse of AI capabilities via fraudulent accounts to extract and replicate advanced AI functionalities without authorization. This unauthorized extraction directly violates intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property. The harm is realized, not just potential, as Anthropic reports extensive data extraction and interaction volumes. The involvement of AI systems in both the misuse and the defense (detection systems) confirms the AI system's central role. Hence, this is an AI Incident involving violation of intellectual property rights due to AI misuse.
Thumbnail Image

奇客Solidot | Anthropic accuses three Chinese AI companies of distilling data to train their models

2026-02-25
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and AI models) and concerns the use of data derived from one AI system to train others without authorization. This constitutes a violation of intellectual property rights and contractual terms, which falls under harm category (c) "Violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labor, and intellectual property rights." Since the alleged unauthorized data use has already occurred and is central to the event, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Chinese AI Firms Hit Claude with Distillation Attacks, Anthropic Warns

2026-02-24
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude LLM) by malicious actors to extract proprietary AI capabilities illicitly. The misuse is linked to potential harms such as enabling offensive cyber operations, disinformation campaigns, and mass surveillance, which constitute violations of human rights and harm to communities. Although the harms are currently potential, the scale and nature of the attacks and the warnings from Anthropic indicate a credible and significant risk of harm. The event also involves violation of terms of service and unauthorized use, which is a breach of legal and ethical obligations. Given the direct involvement of AI misuse leading to or plausibly leading to significant harms, this event qualifies as an AI Incident.
Thumbnail Image

Musk slams Anthropic for misappropriating training data: an indisputable fact

2026-02-24
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems and their development, focusing on the alleged unauthorized use of training data by Anthropic. This constitutes a violation of intellectual property rights, which is a recognized form of harm under the AI Incident framework. The accusation of paying billions in settlements further supports the occurrence of harm. Therefore, this event qualifies as an AI Incident due to the direct link between AI system development and violation of intellectual property rights.
Thumbnail Image

US defense secretary issues ultimatum to Anthropic: open up the AI or get out; invoking the Defense Production Act not ruled out | 經濟日報

2026-02-25
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's AI models) and its use in military contexts. The Department of Defense's threat to forcibly use the AI technology under the Defense Production Act indicates a potential for misuse or deployment without the company's ethical safeguards. While the AI has been used in military operations, the article does not report any specific harm or incident resulting from this use. The main focus is on the potential for harm due to forced military use and ethical concerns, which could plausibly lead to harms such as autonomous weapons deployment or surveillance abuses. Thus, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a past incident, nor is it unrelated as it clearly involves AI and potential harm.
Thumbnail Image

"AI灭霸"Anthropic与"受害股"合作推出智能体 软

2026-02-24
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Claude AI and its plugins) being developed and deployed for enterprise use. However, the article does not report any injury, rights violations, disruption, or other harms caused by these AI systems. The mention of stock market reactions is economic and market-related but does not constitute harm as defined in the framework. The article primarily provides information about new AI capabilities, partnerships, and market responses, which fits the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

Anthropic Says DeepSeek, Moonshot, and MiniMax Targeted Claude | eWEEK

2026-02-24
eWEEK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) and describes unauthorized use of AI techniques (distillation) to replicate its capabilities at scale, which constitutes a violation of intellectual property rights. This unauthorized extraction is a direct misuse of the AI system leading to harm (violation of rights and potential economic harm). Therefore, it qualifies as an AI Incident. The article also discusses responses and mitigation measures, but the primary focus is on the realized unauthorized extraction harm, not just complementary information or potential future harm.
Thumbnail Image

AI rivals are advancing too fast: Anthropic abandons a key safety commitment

2026-02-25
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
Anthropic's decision to relax its flagship safety policy directly involves the development and use of AI systems. The policy previously prevented training AI models without sufficient safety guarantees, which is a direct safety measure to prevent harm. By abandoning this commitment, the company increases the plausible risk of AI incidents causing harm, including catastrophic risks. The article does not report any realized harm yet but highlights a credible future risk due to reduced safety constraints and competitive pressures. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to people or communities if safety is compromised.
Thumbnail Image

Anthropic accuses Chinese rivals of massive data theft from its Claude chatbot

2026-02-24
Pplware
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the unauthorized extraction of AI capabilities from Anthropic's chatbot Claude by other companies using AI techniques. This constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, thus fitting the definition of an AI Incident. Additionally, the event highlights risks to national security due to the potential misuse of these derived AI models lacking safety safeguards, reinforcing the harm dimension. Therefore, this is classified as an AI Incident.
Thumbnail Image

Anthropic accuses Chinese companies of theft and manipulation to benefit their own AI models - Tek Notícias

2026-02-24
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and other AI models) and describes the unauthorized use and manipulation of these AI systems by other companies to train their own models. This misuse directly violates intellectual property rights, which is a recognized harm under the AI Incident definition (c). Furthermore, the article highlights the risks of uncontrolled AI development leading to security threats, reinforcing the seriousness of the harm. Since the harm is realized (illegal use and IP violation) and the AI system's misuse is central to the event, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Top AI Firm Says Chinese Labs Stole U.S. Tech Using 24,000 Fake Accounts

2026-02-25
SGT Report
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude chatbot) being exploited through fraudulent accounts to steal technology, which is a misuse of the AI system's outputs. The harm includes intellectual property theft (a breach of intellectual property rights) and the plausible use of stolen AI capabilities for military and surveillance systems, which can lead to violations of human rights. The involvement of U.S. government investigations and the direct link to harm through unauthorized use and potential misuse confirms this as an AI Incident rather than a hazard or complementary information. The harm is realized or ongoing, not merely potential.
Thumbnail Image

Anthropic accuses DeepSeek, Moonshot AI, and MiniMax of extracting Claude model outputs via "distillation"

2026-02-24
明報新聞網 - 即時新聞 instant news
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) outputs by other AI companies to train their own models without authorization, which constitutes a violation of intellectual property rights and service terms. This unauthorized extraction and use of AI outputs can be considered a breach of obligations intended to protect intellectual property rights, thus meeting the criteria for an AI Incident. The harm is indirect but materialized, as the unauthorized use undermines the rights of the original AI developer and could impact the AI ecosystem's integrity. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Top Tech News Today, February 24, 2026 - Tech Startups

2026-02-24
Tech News | Startups News
Why's our monitor labelling this an incident or hazard?
The article discusses multiple AI-related topics, including realized harms (e.g., AI-assisted cyberattacks, security breaches) and potential risks (e.g., agent population explosion, supply-chain attacks), but these are presented as part of a broader news roundup rather than a detailed report on a single incident or hazard. The harms mentioned are indirect and part of ongoing trends rather than a new, specific AI Incident. Similarly, the potential risks are general and not tied to a particular event that could plausibly lead to harm imminently. The article also covers governance, market dynamics, and strategic shifts, which align with the definition of Complementary Information. Hence, the classification as Complementary Information is appropriate.
Thumbnail Image

Anthropic accuses DeepSeek, Moonshot AI, and MiniMax of coordinated 'distillation attack' on Claude - Tech Startups

2026-02-24
Tech News | Startups News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' models) and describes a coordinated misuse of AI (distillation attack) that has directly led to harm in the form of intellectual property rights violations and potential national security threats. The large-scale unauthorized extraction of AI capabilities constitutes a breach of obligations protecting intellectual property rights, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, as Anthropic reports millions of illicit interactions and capability extraction. Hence, the classification as AI Incident is appropriate.
Thumbnail Image

Anthropic accuses Chinese giants of stealing technology through fake accounts | TugaTech

2026-02-24
TugaTech
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) through fraudulent accounts to illicitly extract advanced AI capabilities, constituting a violation of intellectual property rights. This fits the definition of an AI Incident because the development and use of the AI system have directly led to a breach of obligations under applicable law protecting intellectual property rights. The large-scale unauthorized access and copying of AI capabilities represent a clear harm. Although mitigation efforts are underway, the incident is ongoing and significant.
Thumbnail Image

Anthropic accuses Chinese companies of illicitly copying Claude's capabilities

2026-02-25
Fredzone
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and describes a misuse scenario (unauthorized extraction of model capabilities) that could plausibly lead to harm such as intellectual property rights violations and competitive disadvantage. However, no direct or indirect harm has been confirmed or realized yet, and no legal infractions have been formally alleged beyond violation of terms of use. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident if the unauthorized use results in significant harm or breaches. It is not Complementary Information because it reports a new allegation and situation, not a response or update to a prior incident. It is not unrelated because AI systems and their misuse are central to the event.
Thumbnail Image

Anthropic Alleges Chinese AI Labs Quietly Reverse-Engineered Claude to Build Their Own Models

2026-02-24
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and their outputs being used without authorization to train competing models, which is a violation of intellectual property rights and trade secret protections. The harm is realized as Anthropic alleges systematic unauthorized use of its model outputs, which is a breach of legal and ethical obligations. This fits the definition of an AI Incident because the development and use of AI systems have directly led to a breach of intellectual property rights. The geopolitical and legal complexities do not negate the fact that harm has occurred. Hence, the classification as AI Incident is appropriate.
Thumbnail Image

Anthropic accuses DeepSeek: "It exploits Claude to train its AI"

2026-02-24
Key4biz
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other AI models) and their misuse in training other AI systems without authorization, which is a breach of intellectual property rights. The creation of fraudulent accounts and systematic extraction of model outputs for unauthorized training is a direct misuse of AI development and use. This meets the criteria for an AI Incident under violations of intellectual property rights. The article also mentions broader implications and responses but the core event is the realized harm from unauthorized use of AI technology.
Thumbnail Image

Three Chinese companies accused of pirating US AI model capabilities | 台灣大紀元

2026-02-24
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models like Claude and derivative models by Chinese companies) and describes their misuse through unauthorized distillation attacks. While no direct harm has yet occurred, the potential for significant harm is clearly articulated, including risks to national security, cyberattacks, misinformation, and surveillance. The event is about the use and misuse of AI systems that could plausibly lead to serious harms, fitting the definition of an AI Hazard. It is not an AI Incident because the harms are potential rather than realized. It is not Complementary Information because the main focus is on the misuse and its risks, not on responses or updates. It is not Unrelated because AI systems and their misuse are central to the event.
Thumbnail Image

Anthropic alleges massive data extraction by Chinese AI companies

2026-02-24
The American Bazaar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: Anthropic's Claude chatbot and the AI models of the accused companies. The misuse (use of Claude's outputs to train other models) is a direct use of the AI system's outputs in a way that breaches terms of service and likely intellectual property rights. This misuse has already happened at scale, causing harm to Anthropic and raising legal and ethical concerns. Therefore, this qualifies as an AI Incident due to violation of intellectual property rights and misuse of AI system outputs leading to harm to the original AI developer's property and competitive position.
Thumbnail Image

Chinese AI companies trained their own models on Claude without Anthropic's consent

2026-02-24
LIGA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in a way that directly leads to a violation of intellectual property rights, as the Chinese companies used Claude's outputs without consent to train their own models. This unauthorized use and large-scale data extraction represent a breach of legal protections and rights associated with the AI system's development and deployment. Therefore, this qualifies as an AI Incident due to the realized harm of rights violation.
Thumbnail Image

Anthropic says Chinese AI firms, including DeepSeek, 'distilled' Claude to improve their own models - The Tech Portal

2026-02-24
The Tech Portal
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the illicit extraction of capabilities from Anthropic's Claude model by Chinese AI firms using fraudulent means. This misuse directly leads to violations of intellectual property rights and breaches of terms of service, which are recognized harms under the AI Incident definition. The large-scale, coordinated nature of the campaigns and the use of deceptive methods to evade detection further support classification as an AI Incident rather than a mere hazard or complementary information. Therefore, this event meets the criteria for an AI Incident due to the realized harm of intellectual property violation and unfair competitive advantage.
Thumbnail Image

US AI company Anthropic accuses Chinese peers of stealing model capabilities, urges government intervention | yam News

2026-02-24
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and Chinese AI models) and describes the misuse of AI technology through unauthorized large-scale querying to replicate model capabilities. This misuse directly relates to violations of intellectual property rights and the circumvention of safety measures embedded in the original AI system, which can be considered a breach of obligations under applicable law and a potential harm to safety and security. Although no physical harm is reported, the incident involves realized harm in terms of intellectual property theft and potential security risks, fitting the definition of an AI Incident.
Thumbnail Image

Anthropic accuses DeepSeek and other Chinese AI labs of using 24,000 fraudulent accounts to illicitly extract Claude's capabilities and improve their own models

2026-02-24
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude) by other entities to extract capabilities illicitly, which directly leads to harms such as violations of intellectual property rights, along with security and societal risks from the proliferation of unsafe AI. The large-scale fraudulent use of AI, and the potential for these distilled models to be used in harmful ways, meet the criteria for an AI Incident. The event is not merely a potential risk but describes ongoing unauthorized use and its consequences, qualifying it as an incident rather than a hazard or complementary information.
Thumbnail Image

Chinese AI companies 'distilled' Claude to improve own models, Anthropic says

2026-02-24
ansarpress.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) by other companies to illicitly train their own models through distillation, which is a form of AI system use leading to potential harm. The lack of safeguards in the distilled models and the risk of their open-source release create a credible threat to national security, a form of harm to communities and possibly a breach of legal or regulatory obligations. Since the harm is not yet realized but plausibly could occur, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports an active misuse with significant risk, nor is it unrelated as it clearly involves AI systems and their misuse.
Thumbnail Image

The thief crying "stop thief"? Musk responds to Anthropic's accusation that Chinese AI companies stole secrets - CNMO科技

2026-02-24
ai.cnmo.com
Why's our monitor labelling this an incident or hazard?
Anthropic's accusation involves the use of an AI system (Claude) and the alleged misuse of its outputs by other AI companies to train their own models without authorization. This directly relates to a breach of intellectual property rights, which is a form of harm under the AI Incident definition (c). The event describes realized harm (data theft and unauthorized use) rather than a potential or future risk. Therefore, this qualifies as an AI Incident. Elon Musk's commentary does not change the nature of the event but provides context.
Thumbnail Image

Zhitong Finance APP reports that, according to people familiar with the matter, the Pentagon has threatened to invoke a Cold War-era law to force Anthropic to let the US military use its technology if the AI startup does not comply with government terms by Friday. At a Tuesday meeting between Anthropic CEO Dario Amodei and US Defense Secretary Pete Hegseth, officials laid out a series of consequences, including threatening to designate Anthropic a supply-chain risk and to invoke the Defense Production Act so that its AI software could be used even without the company's consent. The ultimatum marks an escalation in the dispute between the Department of Defense and the startup, which centers on Anthropic's insistence on usage guardrails for its Claude AI tools, restrictions the military considers unnecessary. If the Pentagon follows through, it would jeopardize up to $200 million in contracts Anthropic holds with the military.

According to one of the people, Amodei set out Anthropic's conditions at the meeting: the military may not use its products for autonomous strikes against enemy combatants or for mass surveillance of US citizens, scenarios he stressed have not occurred in actual operations. In a statement after the meeting, Anthropic said: "We have engaged in ongoing good-faith discussions about usage policy to ensure that Anthropic can continue to support the government's national-security mission within the bounds of what our models can do reliably and responsibly."

Valued at roughly $380 billion in its latest funding round, Anthropic was the first AI company approved to handle classified material inside the US government, and its Claude Gov tool quickly became a favorite of Pentagon officials for its ease of use. In national security, however, it faces growing competition from Elon Musk's xAI (which has just obtained clearance for classified work), OpenAI, and Google's Gemini. The dispute erupted weeks after the Pentagon released a new AI strategy calling for an "AI-first" military through more experimentation with frontier models and fewer bureaucratic barriers to their use; the strategy specifically urges the department to choose models "without usage-policy restrictions that would impede lawful military applications." A US official said the Pentagon began to doubt Anthropic's support for US objectives after learning the company had raised questions about how AI was used in the special-forces operation that seized Venezuelan President Maduro in early January. Anthropic disputes that account. "Anthropic has not discussed Claude's use in specific operations with the Department of Defense," a spokesperson said Monday. "Nor have we discussed the matter with any industry partner or expressed concerns, except in routine exchanges at a strictly technical level."

Anthropic positions itself as a company focused on the responsible use of AI, with the goal of avoiding catastrophic outcomes from the technology; it built Claude Gov specifically for US national-security purposes and aims to serve government customers within its own ethical boundaries. Responding to Anthropic's concerns that its technology could be used for mass surveillance and autonomous strikes, Pentagon officials insisted the department follows the law and always keeps humans in the decision loop. If the Pentagon designates Anthropic a supply-chain risk, its products would be barred from use by other military suppliers, which would then have to verify that they do not use them. And under the Defense Production Act of 1950, the government can compel US companies to provide needed products or services on national-security grounds; past presidents have used the law to secure energy supplies, including forcing the refit of oil tankers in the 1960s and redirecting contracted oil to the military in the 1970s.

2026-02-24
证券之星
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude AI) and concerns its development and use. The dispute centers on the potential forced use of this AI system by the military under a legal mandate, which could plausibly lead to harms including autonomous lethal actions and mass surveillance. Although these harms have not yet materialized, the credible threat of compelled use without ethical restrictions constitutes a plausible risk of significant harm. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on a credible threat of harm stemming from AI use. Hence, the classification is AI Hazard.
Thumbnail Image

Weeks after its legal plugin triggered a market plunge, Anthropic launches new AI tools with great fanfare

2026-02-24
新浪财经
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Anthropic's Claude AI and its plugins) and their use in business contexts. It references a past AI Incident where a legal AI plugin indirectly caused significant financial market harm. However, the current event is about the launch and promotion of new AI tools without any new harm or plausible future harm described. Therefore, this article serves as Complementary Information, providing context and updates related to a prior AI Incident and the evolving AI ecosystem, rather than reporting a new AI Incident or AI Hazard.
Thumbnail Image

Anthropic launches agentic AI tools aimed at automating investment banking and human resources

2026-02-24
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude chatbot and AI tools for automation). However, there is no indication that these tools have caused any injury, rights violations, disruption, or other harms yet. The article does not describe any incident or credible risk of harm arising from these AI tools at this stage. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information since it provides context on AI development and deployment in new sectors, which is relevant to understanding the AI ecosystem but does not report harm or credible risk of harm.
Thumbnail Image

"AI Thanos" Anthropic partners with "victim stocks" to launch agents; software sector rebounds

2026-02-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Claude AI and its plugins) being developed and deployed for enterprise use. However, the article does not report any incident of harm, violation of rights, or disruption caused by these AI systems. It also does not suggest any credible risk of future harm. The content is primarily about new AI product offerings, collaborations, and market responses, which fits the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

The distillation dispute between Chinese and US AI is coming to a head |【经纬低调分享】

2026-02-25
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and their use in a manner that allegedly breaches contractual terms and intellectual property rights, which are recognized harms under the AI Incident definition (violations of human rights or breach of obligations under applicable law, including intellectual property rights). The use of AI for large-scale capability extraction through coordinated API calls and evasion of access controls directly leads to these harms. The article details the nature of the AI system involvement, the use and misuse of AI outputs, and the resulting legal and competitive harms. Although there is debate about the legality and ethical boundaries, the event describes actual realized harm and not just potential risk. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Anthropic accuses three AI companies of large-scale "distillation" of its Claude model

2026-02-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude model) and describes malicious use of AI techniques (fake accounts, distributed querying) to extract proprietary AI capabilities, which constitutes a breach of intellectual property rights. While no direct harm has yet occurred, the report warns of plausible future harms including unethical use of distilled models in dangerous applications. The event does not describe realized harm but highlights a credible risk of significant harm, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information since it reports a new event with potential harm, nor is it unrelated.
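This entry highlights the attack's operational signature: many fake accounts issuing distributed queries, which is also what defenses key on. Below is a minimal sketch of one plausible countermeasure, flagging account pairs whose query streams overlap suspiciously; the logs, fingerprints, and threshold are hypothetical, and production abuse detection would combine many more signals.

# Minimal sketch of one way a provider might flag coordinated accounts:
# accounts whose query streams are unusually similar to each other are
# paired and surfaced for review. All data here is hypothetical.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

# Stand-in logs: account id -> set of normalized query fingerprints.
logs = {
    "acct_1": {"q1", "q2", "q3", "q4"},
    "acct_2": {"q2", "q3", "q4", "q5"},   # heavy overlap with acct_1
    "acct_3": {"q9", "q10"},              # unrelated organic user
}

SIMILARITY_THRESHOLD = 0.5
suspicious_pairs = [
    (a, b) for a, b in combinations(logs, 2)
    if jaccard(logs[a], logs[b]) >= SIMILARITY_THRESHOLD
]
print(suspicious_pairs)  # -> [('acct_1', 'acct_2')]
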
Thumbnail Image

Anthropic accuses Chinese companies of illicit distillation of Claude | SempreUpdate

2026-02-24
SempreUpdate
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) leading to a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property. The unauthorized distillation and replication of the AI model's capabilities directly cause harm to the rights holders and pose risks to security and ethical standards. Therefore, this qualifies as an AI Incident due to realized harm related to intellectual property violations and potential broader harms linked to security and ethical concerns.
Thumbnail Image

Sina AI Hot Topics Hourly Report | 05:00, February 25, 2026: today's real-time AI news roundup

2026-02-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm caused by AI systems (no AI Incident). It also does not describe a plausible future harm from AI system development or use that would qualify as an AI Hazard. Instead, it focuses on governance discussions (Anthropic's refusal to loosen military use restrictions), strategic AI infrastructure plans (SpaceX's satellite AI data center), and legal rulings (xAI vs OpenAI). These are updates that enhance understanding of the AI ecosystem and governance responses, fitting the definition of Complementary Information.
Thumbnail Image

Anthropic accuses AI companies of abusing Claude

2026-02-24
HiTech.Expert
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude and other AI models) and describes misuse of AI capabilities through 'distillation attacks' by other companies. While this misuse could plausibly lead to harms such as security breaches, unfair competitive advantage, or circumvention of safety measures, no direct harm or incident is reported as having occurred. The focus is on the potential for harm and ongoing misuse, with Anthropic working on mitigation. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Anthropic Accuses China AI Firms of Model Mining

2026-02-24
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude models and the Chinese firms' AI models trained via distillation). The misuse of these AI systems through unauthorized access and extraction of proprietary capabilities directly leads to violations of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. Additionally, the potential for these stolen capabilities to be used in military or surveillance systems poses a significant harm. The event describes ongoing and realized unauthorized use and harm, not just potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Anthropic accuses DeepSeek, MiniMax, and Moonshot of hacking Claude via 24,000 fraudulent accounts

2026-02-24
Le Jour Guinée, actualités des banques en ligne
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the competing AI models) and describes the misuse of AI via fraudulent accounts to extract model capabilities illicitly. This misuse has directly led to commercial harm (theft of intellectual property and competitive advantage) and potential harm to national security through the creation of unsafe AI models that could be used maliciously. The involvement of AI systems in the development, use, and misuse is clear, and the harms described fit within the definitions of AI Incident, including violation of intellectual property rights and potential harm to communities and security. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Chinese AI labs accused by Anthropic of mining Claude

2026-02-25
SC Media
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of an AI system (Claude) through fraudulent means to extract its capabilities, which could plausibly lead to significant harms such as cyberattacks, disinformation campaigns, and mass surveillance. Since the harms are described as potential and the article focuses on the risk and advocacy for controls rather than confirmed realized harm, this fits the definition of an AI Hazard rather than an AI Incident. The involvement of AI is explicit, and the potential harms are clearly articulated and plausible based on the misuse described.
Thumbnail Image

Chinese AI Firms Replicate Model Using 16 Million Claude Queries, Says Anthropic

2026-02-24
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Claude) by unauthorized actors to illegally extract model capabilities, which is a direct violation of terms and regional laws. The illicit distillation leads to models without safeguards, raising national security risks and potential harm to communities and countries. The harm is realized in the form of illegal activity and potential misuse of AI capabilities, fulfilling the criteria for an AI Incident. The company's mitigation efforts are complementary but do not negate the incident classification.
Thumbnail Image

Anthropic accuses 3 Chinese firms of data theft

2026-02-25
DT Next
Why's our monitor labelling this an incident or hazard?
Anthropic's accusation that three Chinese firms used fraudulent accounts to harvest large amounts of data from its AI chatbot to train their own AI systems indicates a breach of intellectual property rights and contractual terms. The use of AI-generated data without authorization for training other AI systems directly violates legal and ethical frameworks protecting intellectual property. Therefore, this event qualifies as an AI Incident due to the violation of intellectual property rights caused by the misuse of AI system outputs.
Thumbnail Image

Why did Anthropic accuse three Chinese AI labs?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event involves AI systems and their use, specifically the alleged misuse of AI outputs for model distillation at scale. This constitutes a potential violation of intellectual property rights and unfair competitive behavior, which fits within the scope of AI-related harms. However, since the article does not confirm that any actual harm has occurred or that legal or regulatory actions have been taken, and the harm remains a plausible risk rather than realized, the event is best classified as an AI Hazard. It signals a credible risk of harm through unauthorized data siphoning and model replication but does not document a direct or indirect AI Incident at this stage.
Thumbnail Image

AI rivals are advancing too fast: Anthropic abandons a key safety commitment - cnBeta.COM mobile edition

2026-02-25
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Anthropic's original policy was a safety measure to prevent training AI models without adequate risk mitigation, which is directly related to the development and use of AI systems. The abandonment of this commitment reduces safety oversight and increases the plausible risk of harmful AI incidents in the future. Although no actual harm has been reported yet, the event clearly indicates a credible risk of future harm stemming from AI development practices. This fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident. There is no indication of realized harm or incident, so it is not an AI Incident. The event is more than just complementary information because it reveals a significant policy shift with safety implications, not merely an update or response to past incidents.
Thumbnail Image

Can 16 million queries distill an AI model? ChatGPT's answer: not enough to create something at my level

2026-02-24
m.163.com
Why's our monitor labelling this an incident or hazard?
The content centers on the development and training methodologies of AI models, specifically the role of distillation and supervised fine-tuning with a given dataset size. There is no mention or implication of injury, rights violations, infrastructure disruption, or other harms caused or plausibly caused by these AI systems. The article is an analysis or explanation of AI capabilities and training data sufficiency, which fits the definition of Complementary Information as it provides context and understanding about AI development without describing an incident or hazard.
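The article's headline question, whether 16 million interactions can distill a frontier model, reduces to rough arithmetic on token counts. A back-of-envelope sketch follows; the 16 million figure comes from the reporting, while the per-interaction token count and the pretraining scale are assumptions for illustration.

# Back-of-envelope comparison of the alleged harvest against typical
# training-data scales. The 16M figure comes from the reporting; the
# per-interaction token count and pretraining scale are assumptions.
interactions = 16_000_000
tokens_per_interaction = 1_000        # assumed average prompt + response
harvested_tokens = interactions * tokens_per_interaction   # 1.6e10

pretraining_tokens = 10 * 10**12      # ~10T tokens, a common frontier scale
print(f"harvested: {harvested_tokens / 1e9:.1f}B tokens")
print(f"share of a ~10T-token pretraining run: "
      f"{harvested_tokens / pretraining_tokens:.2%}")
# -> roughly 16B tokens, about 0.16% of pretraining scale: far too little
#    to pretrain a comparable model, but plenty for targeted fine-tuning.
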
Thumbnail Image

What did Anthropic accuse Chinese labs of?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude chatbot) and the alleged misuse of its outputs by other AI labs to train their own models. This constitutes a violation of intellectual property rights and contractual obligations, which falls under harm category (c) in the AI Incident definition. Although no formal legal or regulatory actions have been reported yet, the described large-scale unauthorized use of AI outputs causing harm to the original developer's rights qualifies this as an AI Incident due to realized harm through intellectual property violation and unfair competitive practices.
Thumbnail Image

Anthropic alleges "AI tigers" are illegally extracting capabilities | CNN Brasil

2026-02-24
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (proprietary AI models and their distillation). The event concerns the use (misuse) of AI systems by Chinese labs to create unauthorized models, which could plausibly lead to harms such as cyberattacks, disinformation, and surveillance abuses. No direct harm is reported as having occurred yet, but the potential for significant harm is clearly articulated and linked to the AI systems' misuse. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Anthropic accuses DeepSeek and other Chinese AI services of unlawfully extracting Claude's capabilities

2026-02-23
Межа
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Anthropic's Claude and the Chinese AI models) and describes the unauthorized use of AI outputs to train other AI systems, which is a breach of intellectual property rights. This misuse has already occurred, constituting realized harm. Additionally, the potential integration of these capabilities into military and surveillance systems implies further risks. Hence, the event qualifies as an AI Incident due to direct involvement of AI systems in causing violations of rights and potential broader harms.
Thumbnail Image

2026-02-24
developpez.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude and derivative models) and describes their misuse through unauthorized distillation. The misuse is linked to plausible future harms including security risks, misuse in military and surveillance contexts, and disinformation, which are significant harms under the framework. Since the harms are not described as having already occurred but are credible and potentially severe, this fits the definition of an AI Hazard rather than an AI Incident. The report also calls for coordinated sector and policy responses, reinforcing the hazard nature. Therefore, the classification is AI Hazard.
Thumbnail Image

Anthropic accuses DeepSeek and other Chinese large AI models of plagiarism; Musk fires back point-blank: a thief crying "stop thief" while stealing data on a massive scale

2026-02-24
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models) and describes the misuse of AI outputs through distillation attacks to replicate proprietary model capabilities. This misuse is alleged to be unauthorized and in violation of service terms and intellectual property rights, which fits the definition of harm (c) under AI Incidents. The harm is realized (not just potential), as Anthropic claims large-scale unauthorized interactions and data extraction have already occurred. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

DeepSeek illegally obtained US AI model capabilities; experts analyze the harms

2026-02-24
botanwang.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Claude, ChatGPT) and their unauthorized use via large-scale distillation attacks by Chinese companies. This misuse directly leads to harms including violation of intellectual property rights, threats to national security, and potential harm to communities through misuse in cyber warfare and misinformation. The AI systems' development and use are central to the incident, and the harms are realized or ongoing as per the article. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

2026-02-24
next.ink
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (generative AI models) and concerns their development and use. The accusations relate to unauthorized use of AI-generated data to train competing models, which could constitute a violation of intellectual property rights, a recognized AI harm category. However, the article only reports accusations and ongoing disputes without confirmed or realized harm or legal rulings. Thus, it fits the definition of an AI Hazard, as the development and use of AI systems here could plausibly lead to an AI Incident involving rights violations or unfair competitive harm, but no direct or indirect harm has yet materialized according to the article.
Thumbnail Image

Anthropic accuses Chinese firms of distillation attacks

2026-02-24
semafor.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (distillation attacks on AI models) and highlights the potential for these illicitly trained models to be used in harmful ways, including weapons development and cybercrime. Since the harm is not reported as realized but is plausibly foreseeable and significant, this qualifies as an AI Hazard rather than an AI Incident. The accusation and the nature of the activity indicate a credible risk of future harm stemming from the AI systems' misuse.
Thumbnail Image

Anthropic accuses Chinese AI labs of data theft

2026-02-24
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models) and their unauthorized use through model distillation, which is a method related to AI model training and knowledge extraction. The large-scale unauthorized interactions and data extraction constitute a breach of intellectual property rights, a recognized form of harm under the AI Incident definition. The potential consequences mentioned (cyberattacks, disinformation, mass surveillance) further underscore the harm linked to the AI system's misuse. Since the harm is realized (data theft and violation of usage terms) and the AI system's role is pivotal, this is classified as an AI Incident.
Thumbnail Image

What are Anthropic's accusations against Chinese AI labs?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Claude chatbot and competing AI models) and concerns the unauthorized use of AI-generated content to train other AI systems. This implicates potential violations of intellectual property rights, which falls under harm category (c) in the AI Incident definition. Although the legal and technical standards are not yet settled, the described activity has already occurred and involves direct misuse of AI outputs, constituting an AI Incident due to the violation of rights and illicit appropriation of AI-generated content.
Thumbnail Image

AI scandal: Anthropic accuses three Chinese companies of "stealing" Claude's capabilities through 24,000 fake accounts

2026-02-24
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude and the alleged copied AI models) and concerns the use and development of AI. The alleged large-scale copying via fake accounts is a misuse of AI systems that could plausibly lead to violations of intellectual property rights and undermine AI safety efforts, which fits the definition of an AI Hazard. Since no actual harm or legal ruling is reported, and the harm is potential rather than realized, it does not qualify as an AI Incident. It is not merely complementary information because the main focus is on the alleged misuse and its implications, not on responses or ecosystem updates. Therefore, the classification is AI Hazard.
Thumbnail Image

Anthropic Criticizes Chinese AI Labs' Efforts

2026-02-24
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude AI models) and the unauthorized use of these systems through model distillation by other AI labs. The unauthorized extraction and use of AI model knowledge directly violates intellectual property rights, which is a recognized harm under the AI Incident definition. Additionally, the concerns about national security threats such as cyberattacks and disinformation campaigns further support the classification as an AI Incident. The event describes actual unauthorized use and harm rather than potential or future risks, so it is not merely a hazard or complementary information. Hence, the classification as AI Incident is appropriate.
Thumbnail Image

Anthropic AI Claims That It Has Identified 'Industrial-Scale Distillation Attacks' By Chinese AI Company DeepSeek

2026-02-24
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the illicit distillation of AI models by foreign entities. This constitutes a violation of intellectual property rights and raises the risk of harm through the potential deployment of these extracted capabilities in military and surveillance contexts. Although no direct harm is reported as having occurred yet, the described activity plausibly leads to significant harms, including breaches of legal protections and risks to security. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm stemming from the misuse of AI model capabilities.
Thumbnail Image

Anthropic Reports Safety Breach Affecting Claude AI Models. Is Crypto at Risk?

2026-02-24
DailyCoin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude language models) and details a security breach where unauthorized use and replication attempts occurred. While no direct harm (such as injury or operational disruption) is reported, the breach undermines safety protections and could plausibly lead to significant harms, including misuse in financial decision-making and cryptocurrency operations. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident in the future. The company's response and calls for coordinated industry and regulatory action further support the classification as a hazard rather than an incident or complementary information.
Thumbnail Image

Anthropic Reveals Chinese Firms' Attempts to Steal LLM Technology

2026-02-24
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and describes misuse (distillation attacks) that breach terms of service and intellectual property protections. While the misuse is ongoing and involves large-scale unauthorized extraction of AI capabilities, the article does not confirm that these actions have resulted in direct or indirect harm such as legal violations or operational disruptions. The focus is on the potential threat and the need for coordinated defense, indicating a plausible future harm scenario rather than a realized incident. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

Anthropic accuses Chinese companies of unfair AI distillation

2026-02-24
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Claude and other AI models) and discusses the use and misuse of AI model distillation techniques. The allegations point to potential misuse that could lead to significant harms, including national security risks, but no actual harm or incident has been reported as having occurred. Therefore, this situation represents a plausible future risk of harm stemming from AI system misuse, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and misuse rather than responses or ecosystem updates.
Thumbnail Image

And you have the nerve to accuse others of distillation? Musk slams Anthropic for large-scale misappropriation of training data

2026-02-24
m.163.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Claude chatbot and others) and discusses the development and use of AI training data, including alleged unauthorized data use and theft. These relate to violations of intellectual property rights, which qualify as harms under AI Incident definitions. However, the article does not report a concrete incident where these harms have been legally or officially established or where direct harm has been realized and documented. Instead, it focuses on accusations, public disputes, and calls for coordinated industry and policy responses. This aligns with the definition of Complementary Information, which includes updates on societal, technical, or governance responses and ongoing debates about AI harms, rather than a new AI Incident or Hazard. Hence, the classification is Complementary Information.
Thumbnail Image

xAI reaches agreement with the Pentagon; Grok to enter classified US military systems - cnBeta.COM mobile edition

2026-02-24
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models) in classified military and intelligence systems, which are critical infrastructure and sensitive environments. The agreement allows broad usage rights, including potentially for autonomous weapons and mass surveillance, which raises credible risks of harm such as violations of human rights and other significant harms. No actual harm is reported yet, but the plausible future harm is significant given the context and the nature of the AI deployment. Hence, it is classified as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated as it clearly involves AI systems and potential harm.
Thumbnail Image

Did Anthropic accuse Chinese AI labs?

2026-02-24
AllToc
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models) and their outputs being used by other AI labs to train downstream models without authorization. This constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. Since the alleged violation has already occurred and is publicly accused, it represents a realized harm rather than a potential one. Therefore, this qualifies as an AI Incident due to the violation of intellectual property rights caused by the use of AI systems.
Thumbnail Image

Anthropic accuses DeepSeek, Moonshot, MiniMax of misusing Claude AI; Elon Musk fires back

2026-02-24
News9live
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude chatbot and other AI models) and their misuse in training other AI systems without authorization, which is a violation of intellectual property rights (harm category c). The allegations describe actual unauthorized use and large-scale interactions, indicating realized harm rather than mere potential. The discussion of possible use in military or surveillance systems further underscores the seriousness of the harm. Elon Musk's counter-accusations add to the context of data misuse within AI development. Given the direct link between AI system misuse and violations of rights, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Illegal AI distillation: Chinese companies copy the Claude model

2026-02-24
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems (Claude model) through illegal distillation, leading to the creation of unprotected AI models that pose significant security risks. These risks include potential harm to national security and human rights through misuse in cyberattacks, disinformation campaigns, and surveillance. Although no direct harm to individuals is reported yet, the described activities constitute a credible and significant threat that could plausibly lead to AI incidents involving harm to communities and violations of rights. Therefore, this qualifies as an AI Hazard due to the plausible future harm stemming from the illegal use and replication of AI capabilities without safeguards.
Thumbnail Image

Anthropic accuses China of 'industrial-scale' attempt to steal Claude's abilities

2026-02-24
Neowin
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude and the Chinese labs' AI models) and describes a large-scale misuse attempt (distillation attacks via fraudulent accounts) that could plausibly lead to harm such as intellectual property theft and unfair competitive advantage. However, the article does not describe any actual harm realized yet, such as legal violations, health or safety impacts, or operational disruptions. Anthropic's response measures to detect and mitigate the attacks are also detailed, but these are reactive and preventive. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the theft succeeds or causes further consequences, but no direct or indirect harm has been confirmed at this stage.
Thumbnail Image

Anthropic Accuses DeepSeek, Moonshot, MiniMax of Cheating Claude; Musk Calls Anthropic the Real Thief

2026-02-25
Republic World
Why's our monitor labelling this an incident or hazard?
Anthropic's claim involves the misuse of an AI system (Claude) to extract outputs illicitly, which could lead to violations of intellectual property rights and unfair competition. Since the article only reports allegations without evidence of actual harm or legal rulings, it fits the definition of an AI Hazard, as the misuse could plausibly lead to an AI Incident if confirmed or if the extracted outputs are used unlawfully. There is no indication of realized harm or ongoing incident, so it is not an AI Incident. It is more than general AI news, so it is not Unrelated or Complementary Information.

Claude vs DeepSeek and Moonshot: How Chinese labs "siphoned" 16 million answers to bridge the AI gap

2026-02-25
Digit
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of AI systems (Claude and the cloned models) in a manner that has directly led to significant harms: theft of intellectual property (a breach of intellectual property rights), stripping of safety features leading to potential physical and cyber harms, and enabling authoritarian censorship and propaganda (violations of human rights and harm to communities). The large-scale, coordinated nature of the attack and the detailed evidence of harm and misuse confirm this is an AI Incident rather than a hazard or complementary information. The AI system's development and use are central to the harms described, fulfilling the criteria for an AI Incident.

Accusations of plagiarism by AI

2026-02-25
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: Anthropic's Claude and the models developed by DeepSeek, Moonshot AI, and MiniMax. The misuse involves illicit extraction of AI capabilities via fraudulent accounts and proxies, which is a misuse of AI system outputs and a breach of legal and ethical frameworks protecting intellectual property. The harm includes violation of intellectual property rights and potential risks to national security due to unsafe AI models derived from this illicit distillation. Since the harm is realized (the illicit extraction occurred) and the potential for further harm is significant, this qualifies as an AI Incident rather than a hazard or complementary information.

Anthropic Alleges 16M Claude Distillation Campaign

2026-02-25
BetaNews
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) by other entities to illicitly extract its capabilities, which directly breaches legal and ethical frameworks protecting intellectual property rights. The unauthorized distillation and replication of AI capabilities without safeguards can lead to significant harms, including enabling malicious cyber activities and disinformation, which affect communities and security. Therefore, this event meets the criteria of an AI Incident due to realized violations and harms linked to the AI system's misuse and unauthorized exploitation.

Anthropic and OpenAI accuse DeepSeek of theft: look who's talking

2026-02-25
Agenda Digitale
Why's our monitor labelling this an incident or hazard?
The article centers on allegations of intellectual property theft and the ethical debate around AI training data usage, which involves AI systems. However, it does not report a concrete AI Incident (no direct or indirect harm has been described as having occurred) nor an AI Hazard (no specific plausible future harm event is described). Instead, it provides contextual and legal analysis, industry perspectives, and calls for regulatory action, fitting the definition of Complementary Information as it enhances understanding of AI ecosystem challenges and governance without reporting a new incident or hazard.

Anthropic denounces the exploitation of Claude by three Chinese labs to train their AIs

2026-02-25
Begeek.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) and describes its unauthorized exploitation by other entities to train competing AI models, which is a direct violation of intellectual property rights and legal frameworks. The harm is realized as it involves illicit use and potential theft of proprietary AI capabilities, which is a breach of obligations under applicable law protecting intellectual property rights. The presence of a related lawsuit against Anthropic for illegal use of copyrighted material further confirms the occurrence of rights violations. Hence, this event meets the criteria for an AI Incident due to direct involvement of AI system misuse causing legal and rights-related harm.

Anthropic denounces mass extraction from Claude: AI's "Napster moment" has arrived

2026-02-25
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems: Anthropic's Claude is the AI system targeted, and the extraction is done via automated AI-driven querying. The harm is a violation of intellectual property rights through unauthorized extraction and copying of AI capabilities, which is a breach of obligations protecting intellectual property. The article documents that this extraction has already occurred at scale, causing direct harm to Anthropic and potentially to the broader AI ecosystem. Although no physical injury or disruption of critical infrastructure is reported, the harm to intellectual property and economic interests is clear and material. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic accuses Chinese companies of copying its artificial intelligence - Siècle Digital

2026-02-25
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (large language models) and their APIs to extract proprietary knowledge through distillation, which is a form of indirect harm to intellectual property rights and potentially to security (a breach of obligations under applicable law and harm to communities via security risks). The allegations describe realized misuse and harm, not just potential risk, thus fitting the AI Incident category rather than AI Hazard or Complementary Information. The event is not merely a general AI news or product update, but a specific incident involving AI misuse leading to harm.

Chinese firms used distillation attacks to copy Claude, Anthropic says

2026-02-25
Computing
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems through distillation attacks, which is a form of AI system use leading to harm. The misuse breaches terms of service and regional restrictions, indicating unauthorized use. More importantly, the illicitly distilled models could lack safety guardrails, enabling harmful applications such as cyberattacks and disinformation campaigns, which constitute violations of human rights and security risks. The event describes actual ongoing misuse and its consequences, not just potential future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic denounces Chinese attempt to copy its Claude model | Sitios Argentina

2026-02-25
SITIOS ARGENTINA - Portal de noticias y medios Argentinos.
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems, specifically the AI model Claude, where malicious actors conducted millions of interactions to extract internal knowledge. This constitutes a misuse of AI systems leading to potential harm including violation of intellectual property rights and risks to national security. Although no direct physical harm is reported, the incident involves clear violations of rights and potential significant harms, fitting the definition of an AI Incident due to the realized misuse and its consequences.

Anthropic accuses DeepSeek and other Chinese labs of stealing Claude's capabilities - PasionMóvil

2026-02-26
PasionMovil
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the derived models) and their misuse through unauthorized data extraction and model distillation. The harm includes violation of intellectual property rights and potential security risks from unprotected AI capabilities being used in military and surveillance contexts. These harms have already occurred or are ongoing, as Anthropic reports the campaigns have taken place and models have been trained. Thus, this is not merely a potential hazard but an actual incident involving AI misuse causing harm.

The biggest builders of artificial intelligence are now its biggest lobbyists

2026-02-25
Forbes Italia
Why's our monitor labelling this an incident or hazard?
While the article involves AI companies and their influence on policy, it does not report any realized harm (AI Incident) or a plausible risk of harm (AI Hazard) stemming from AI system development, use, or malfunction. Instead, it focuses on lobbying efforts, political strategies, and regulatory debates, which fall under societal and governance responses to AI. Therefore, this is best classified as Complementary Information, as it provides context and updates on the AI ecosystem and governance without describing a specific AI Incident or AI Hazard.

Anthropic distillation attacks: AI firm warns of alarming cloning attempts

2026-02-25
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models) and their unauthorized use through model distillation attacks, which is a misuse of AI outputs. The harm is a violation of intellectual property rights, which is one of the recognized harm categories under AI Incidents. The event describes actual ongoing attacks and the company's response, indicating that harm has occurred rather than just a potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic Accuses China of Stealing Claude -- The AI Cold War Is Real

2026-02-26
TechnoSports Media Group
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the stolen models) and describes a deliberate misuse of AI infrastructure to steal AI capabilities, which constitutes a violation of intellectual property rights (a recognized harm under AI Incident definitions). The harm is realized as the theft has already occurred, not merely a potential risk. The report also discusses the broader implications of such theft, including potential misuse, but the primary classification is based on the confirmed intellectual property violation. Hence, this is an AI Incident rather than a hazard or complementary information.

Anthropic's Claim of Distillation Attacks on its Claude Models Builds Around Ongoing AI Supremacy - Tekedia

2026-02-25
Tekedia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and other frontier models) and their outputs being used illicitly to train other AI models through distillation attacks, which is a misuse of AI systems. The harms include violations of intellectual property rights and potential national security risks from unsafe AI models lacking guardrails. The large-scale fraudulent activity and circumvention of terms of service directly link the AI systems' use to these harms. Although some harms are potential or indirect, the ongoing unauthorized extraction and replication of AI capabilities constitute realized violations and risks, fitting the definition of an AI Incident. The geopolitical and security implications further underscore the significance of the harm. Hence, the classification as AI Incident is appropriate.

Why AI Companies Are Suddenly Worried About Theft

2026-02-26
NYMag
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models and their proprietary capabilities) and concerns their misuse through model distillation attacks. The harms described include potential violations of intellectual property rights, threats to national security, and possible malicious uses of AI capabilities, all of which fit within the definitions of AI-related harms. Since the article focuses on ongoing attempts and the potential consequences rather than confirmed incidents of harm, it aligns with the definition of an AI Hazard rather than an AI Incident. The article also discusses the broader implications and responses but does not primarily focus on those, so it is not Complementary Information. Hence, the classification as AI Hazard is appropriate.

Anthropic vs. China: Did DeepSeek copy from Claude?

2026-02-27
Neue Zürcher Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude and Chinese AI models) and concerns their use and alleged misuse (massive API calls to extract knowledge). However, the article does not describe any actual harm occurring to people, infrastructure, rights, property, or communities. The allegations remain unproven and focus on competitive and legal issues rather than realized harm. The discussion includes expert opinions, legal context, and strategic implications, which enrich understanding of AI ecosystem challenges. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anthropic accuses China AI firms of model mining

2026-02-26
Daily Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude models and the Chinese companies' models) and describes misuse of AI capabilities through unauthorized distillation, which is a form of AI system use leading to harm. The harms include violation of intellectual property rights and potential national security risks due to unsafe AI model replication. The misuse is ongoing and has directly led to harm (theft of proprietary AI capabilities and circumvention of access controls). Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

US artificial intelligence developers accuse Chinese firms of steal...

2026-02-26
Computer Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude, ChatGPT, Gemini) and the adversarial use of AI techniques (distillation attacks) to extract proprietary model capabilities and data without permission. This unauthorized extraction and use of data constitutes a breach of intellectual property rights, which is a recognized harm under the AI Incident definition. The harm is realized and ongoing, as the article details large-scale fraudulent account creation and extensive data exfiltration. Therefore, this qualifies as an AI Incident due to violation of intellectual property rights caused by the development and use of AI systems.

Elon Musk just made things very uncomfortable for Anthropic

2026-02-26
Belleville News-Democrat
Why's our monitor labelling this an incident or hazard?
The article involves AI systems and their development and use, specifically regarding training data and model capability extraction. The accusations relate to unauthorized data use and potential copyright infringement, which are legal and ethical issues tied to AI development. However, there is no clear evidence of realized harm such as injury, rights violations, or operational disruption caused by AI systems in this event. The harms discussed are potential or legal in nature, and the article mainly reports on the dispute and its implications for the AI industry rather than a specific harmful incident or an imminent hazard. Thus, it fits the definition of Complementary Information, providing context and updates on AI-related legal and competitive challenges.

AI conflict: US corporations accuse Chinese firms of data theft

2026-02-26
Euronews Deutsch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) and their misuse through model extraction attacks. The misuse has directly led to intellectual property theft and the creation of AI models lacking safety controls, which poses a credible risk of harm to national security and potentially other harms. The event involves the use and misuse of AI systems leading to significant harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is not merely potential but actively occurring, as the attacks are ongoing and have been detected by the victim companies.

AI cold war: US companies allege theft of R&D

2026-02-26
Euronews Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models/chatbots) and their use in a model extraction attack that directly impacts the development and competitive landscape of AI technology. The unauthorized extraction and use of AI-generated data to train competing models constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. Additionally, the lack of safeguards in the distilled models raises plausible risks to national security, including misuse in cyberattacks or biological weapons development, indicating indirect harm. The article reports ongoing activities and warnings from major AI companies, confirming realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI cold war: US groups denounce Chinese theft

2026-02-26
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) and describes their misuse through model extraction attacks. Although no direct harm to users or infrastructure is reported, the potential for these distilled models to be used maliciously (e.g., for biological weapons or cyberattacks) constitutes a credible risk of future harm. This aligns with the definition of an AI Hazard, as the event plausibly could lead to an AI Incident involving harm to communities or national security. The event does not describe realized harm but highlights a credible threat stemming from AI misuse, so it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

AI cold war, US accusations: Chinese companies stole research

2026-02-26
euronews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (chatbots and large language models) and their use in model extraction attacks, which is a misuse of AI technology. The article focuses on the potential security risks and national security concerns arising from these activities, which could plausibly lead to harms such as weaponization or cyberattacks. Since no actual harm or incident is reported, and the focus is on the potential for harm and ongoing detection, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main narrative centers on the risk and misuse itself, not on responses or ecosystem context. It is not unrelated because AI systems and their misuse are central to the event.
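Taken together, the rationales in this feed apply a single four-way triage rule: realized harm makes an AI Incident, credible-but-unrealized harm an AI Hazard, context-only coverage Complementary Information, and everything else Unrelated. The sketch below restates that rule in Python purely for clarity; the Event fields and the triage function are illustrative assumptions reconstructed from these rationales, not the monitor's actual implementation.

    # A minimal, hypothetical sketch of the monitor's four-way triage rule,
    # as reconstructed from the rationales in this feed. Field names and
    # ordering are assumptions; the real classifier weighs more signals.
    from dataclasses import dataclass

    @dataclass
    class Event:
        involves_ai_system: bool  # an AI system's development, use, or output is central
        harm_realized: bool       # injury, rights violation, or property/community harm occurred
        harm_plausible: bool      # a credible path to such harm exists
        adds_context: bool        # governance, legal, or ecosystem context around AI risks

    def triage(event: Event) -> str:
        if not event.involves_ai_system:
            return "Unrelated"
        if event.harm_realized:
            return "AI Incident"  # e.g., a confirmed IP-rights violation via unauthorized extraction
        if event.harm_plausible:
            return "AI Hazard"    # e.g., distilled models lacking safeguards, not yet misused
        if event.adds_context:
            return "Complementary Information"
        return "Unrelated"

    # Example: alleged distillation with no confirmed harm yet -> "AI Hazard"
    print(triage(Event(True, False, True, True)))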

AI Cold War: US companies accuse Chinese firms of theft

2026-02-26
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models/chatbots) and describes their use in a manner that could plausibly lead to significant harms, including violations of intellectual property rights and potential misuse for harmful applications. Although the alleged model extraction is ongoing and unauthorized, no direct harm such as injury, disruption, or rights violations has been reported as having occurred yet. The focus is on the potential for harm and the security risks posed by these distilled models lacking safeguards. Hence, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future but does not yet describe a realized harm.

When one AI tries to "copy" another: Anthropic's alert over distillation attacks against Claude

2026-02-26
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) being exploited via automated, large-scale querying to extract its capabilities illicitly. This exploitation is a misuse of the AI system's outputs, violating legal and contractual rights (intellectual property and terms of service), which is a breach of obligations protecting intellectual property rights. Furthermore, the potential for these extracted models to be deployed without safety safeguards in critical areas like military or surveillance implies harm to communities and security. The misuse is ongoing and has already caused harm by undermining protections and controls, meeting the criteria for an AI Incident rather than a mere hazard or complementary information.

Anthropic accuses Chinese AI firms of distillation attacks on Claude

2026-02-26
MediaNama
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems and their use in a manner that could plausibly lead to harm, such as proliferation of unsafe AI capabilities and misuse by authoritarian regimes. Although the distillation attacks are alleged and no direct harm has been documented, the potential for significant future harm is credible and clearly articulated. The event does not describe a response or update to a past incident, so it is not Complementary Information. It is more than general AI news or product updates, so it is not Unrelated. Therefore, the classification as an AI Hazard is appropriate.

China AI Firms Accused of Stealing Anthropic Tech via 'Distillation' Attacks - News Directory 3

2026-02-26
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (large language models like Claude) and their use (querying and distillation). The alleged activity could plausibly lead to harms such as intellectual property violations (a breach of intellectual property rights), erosion of competitive advantage, and the proliferation of unsafe AI models capable of malicious acts. However, the article does not describe any actual harm or incident that has already occurred; it focuses on accusations and potential consequences. Thus, it fits the definition of an AI Hazard, as the development and use of AI distillation techniques could plausibly lead to an AI Incident in the future if unaddressed.

"أنثروبيك" الأميركية للذكاء الاصطناعي تتّهم شركات صينية منافسة باستخدام نموذجها "كلود" لتطوير قدراتها

2026-02-23
France 24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the competing companies' AI models) and describes the unauthorized use of AI outputs to develop competing AI capabilities, which is a direct violation of intellectual property rights. The harm is realized as it involves theft of proprietary AI technology and potential security risks from unregulated AI model development. This fits the definition of an AI Incident under violations of intellectual property rights and breach of legal protections. The involvement is through the use and development of AI systems, and the harm is direct and significant, not merely potential or speculative.

Anthropic accuses three Chinese companies of practicing "knowledge distillation" | Al Khaleej

2026-02-24
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and other AI models) and describes the misuse of these systems by other companies to unlawfully extract knowledge and replicate AI capabilities. This misuse constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. Additionally, the article highlights the security risks posed by such unauthorized replication, reinforcing the seriousness of the harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Anthropic: Chinese companies exploited Claude to improve their models unlawfully

2026-02-23
صوت بيروت إنترناشونال
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (Anthropic's Claude) by other entities to improve their own AI models without authorization. This misuse directly relates to the development and use of AI systems and is linked to potential significant harms, including national security risks and the uncontrolled spread of powerful AI capabilities. Although no direct physical harm is reported, the described unauthorized distillation and potential open-source release of these models constitute a violation of legal and security frameworks, fitting the definition of an AI Incident due to violations of obligations under applicable law and significant harm to communities and security.

Anthropic accuses Chinese companies of exploiting Claude to strengthen their models

2026-02-23
مانكيش نت
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Anthropic's Claude and other AI models) and discusses misuse of AI capabilities (distillation) by third parties. While this misuse is unauthorized and poses a significant security threat, the article does not document any actual harm occurring yet. The focus is on the potential for harm and the need for regulatory measures to prevent it. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the misuse continues or escalates.

Anthropic accuses Chinese companies of exploiting Claude to improve their models

2026-02-23
Asharq News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and other AI models) and describes unauthorized use (exploitation) of these AI systems by other companies. The use of model distillation to copy capabilities is a form of AI misuse that could lead to significant harms, including security risks and violation of intellectual property rights. However, the article does not describe any actual harm or incident that has occurred yet, only the potential for harm and the need for controls. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic accuses Chinese companies of exploiting "Claude" to train their AI models

2026-02-24
Aljazeera
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and ChatGPT) and their misuse by other companies through AI model distillation, which is a form of AI system use leading to harm. The harm includes violation of intellectual property rights and potential national security risks, which fall under the AI Incident definition's harm categories (c) violations of rights and (e) other significant harms. The misuse is direct and has already occurred, as evidenced by the millions of conversations generated and the public accusations. Hence, this is not a mere potential hazard or complementary information but an AI Incident.

"أنثروبيك" تتّهم شركات ذكاء اصطناعي صينية منافسة بسرقة بياناتها

2026-02-24
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's 'Claude' and the derived models by Chinese companies) and their development/use through 'distillation'. The alleged unauthorized extraction of AI capabilities and the resulting lack of security controls plausibly increase the risk of misuse, including in dangerous applications like biological weapons or cyberattacks, which are significant harms. Since the article does not describe actual realized harm but warns of credible future risks, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential threat and misuse risks, not on responses or ecosystem context. Hence, the classification is AI Hazard.

"أنثروبيك" تتهم 3 شركات صينية بـ "تقطير المعرفة" | صحيفة الخليج

2026-02-24
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems and their development/use through knowledge distillation, which is an AI technique. The unauthorized use of Anthropic's AI system by other companies is described as a violation and a security risk, but no actual harm or incident has been reported yet. The concerns are about potential future harms, including risks to national security, which fits the definition of an AI Hazard. There is no indication that harm has already occurred, so it cannot be classified as an AI Incident. It is not merely complementary information because the main focus is on the risk and misuse itself, not on responses or ecosystem context. Hence, AI Hazard is the appropriate classification.

Chinese companies steal Claude's mind: Anthropic uncovers a digital espionage operation involving two million fake interactions

2026-02-24
الوفد
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Anthropic's Claude chatbot) by other AI companies to extract and replicate its capabilities without authorization. This misuse directly leads to violations of intellectual property rights and poses risks to national security, which are harms under the AI Incident definition. The large-scale, organized nature of the operation and the resulting unauthorized training of competing AI models demonstrate direct harm caused by the AI system's misuse. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"أنثروبيك" تتهم شركات صينية باستخراج بيانات من نموذجها "كلود"

2026-02-24
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude model) and describes how its capabilities were extracted illicitly by other companies using AI techniques (distillation). This unauthorized extraction led to a violation of intellectual property rights and export control laws, which are recognized harms under the AI Incident definition (specifically under violations of intellectual property rights and breach of applicable law). The harm is realized, not just potential, as the companies conducted millions of interactions via fake accounts to replicate the model's capabilities. The event also raises national security concerns, reinforcing the seriousness of the harm. Hence, the classification as an AI Incident is appropriate.

The Anthropic-DeepSeek conflict: accusations of stealing Claude data and developing rival models - Youm7

2026-02-24
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event centers on the alleged unauthorized use of an AI system's outputs (Claude) to train competing models, which is a misuse of AI development and use. While the harm is not yet realized, Anthropic explicitly warns about the risk of unsafe AI models emerging from this practice, which could lead to misuse and harm. The involvement of AI systems is clear, and the potential for violation of intellectual property rights and safety concerns is significant. Since no actual harm has been reported but plausible future harm is credible, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

أنثروبيك" للذكاء الاصطناعي تتّهم شركات صينية باستخدام نموذجها "كلود"

2026-02-24
S A N A
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the Chinese companies' AI models) and the use of AI outputs to develop competing AI systems without authorization. This constitutes a violation of intellectual property rights, a recognized form of harm under the AI Incident definition. The harm is realized, not merely potential, as the unauthorized use has already occurred. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

"أنثروبيك" الأميركية تتهم 3 شركات صينية بسرقة الملكية الفكرية

2026-02-24
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (Anthropic's Claude and the Chinese companies' models) and describes realized harm in the form of intellectual property theft, which is a violation of intellectual property rights under applicable law. The unauthorized use of the AI system to copy capabilities and train competing models constitutes a breach of legal protections and terms of service. Additionally, the event raises concerns about national security risks from uncontrolled dissemination of AI capabilities. Since the harm (intellectual property theft and associated risks) has already occurred and is central to the report, this qualifies as an AI Incident rather than a hazard or complementary information.

Three Chinese companies exploited the "Claude" program to improve their models unlawfully

2026-02-24
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system ('Claude') by companies to illegally extract capabilities to train their own models, violating terms of service and regional restrictions. The misuse has already occurred (16 million interactions with fake accounts), and the resulting AI models lacking safeguards pose serious security risks, which is a form of harm to communities and national security. The direct involvement of AI systems in this misuse and the resulting harms meet the criteria for an AI Incident rather than a mere hazard or complementary information.

US companies accuse Chinese rivals of stealing research worth billions of dollars - Youm7

2026-02-27
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (AI chatbots and model distillation techniques) leading to alleged intellectual property theft and potential future harms related to misuse of AI for dangerous applications. The unauthorized extraction of AI model outputs via fake accounts constitutes misuse of AI systems, and the potential for resulting models to be used in harmful ways (e.g., biological weapons, cyberattacks) represents a plausible risk of significant harm. Since the article reports ongoing unauthorized activities and highlights credible risks of future harm, but does not report actual realized harm yet, this qualifies as an AI Hazard rather than an AI Incident. The involvement of AI systems is explicit, and the potential for harm is clearly articulated, meeting the criteria for an AI Hazard.

Anthropic reports "large-scale distillation attacks" by three Chinese AI companies including DeepSeek; warns of national security risks

2026-02-24
ITmedia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude) and the misuse of its outputs by other AI companies to create unauthorized models. This misuse directly leads to violations of usage policies and raises national security concerns, which constitute significant harms under the framework (harm to communities and breach of legal obligations). The large-scale nature of the attack and the potential for the resulting models to be used in harmful ways (bioweapons, cyberattacks) confirm the presence of realized and ongoing harm. The involvement of AI system development and use in this context meets the criteria for an AI Incident rather than a hazard or complementary information.

Three Chinese AI companies ran large-scale distillation against Claude: the "serious risks" Anthropic points to

2026-02-25
ITmedia
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of an AI system (Anthropic's Claude) through unauthorized large-scale distillation by other AI companies, which is a form of AI system use leading to potential harms. The harms include violation of intellectual property rights, risks to national security, and the plausible future use of unsafe AI models in harmful military or surveillance applications. Although direct harm has not yet occurred, the article highlights credible risks of significant harm resulting from these actions. Therefore, this constitutes an AI Hazard due to the plausible future harms stemming from the misuse and unauthorized replication of AI capabilities without safety controls.

Anthropic alleges it was hit by distillation attacks from three Chinese AI companies - Excite News

2026-02-24
Excite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' models) and describes unauthorized use and extraction of AI capabilities through a large-scale attack. This misuse directly breaches intellectual property rights and contractual terms, which falls under violations of rights as defined in the AI Incident framework. Since the harm (violation of rights and unauthorized extraction) has already occurred, this qualifies as an AI Incident rather than a hazard or complementary information.

US firm Anthropic alleges its AI model was plagiarized by China's DeepSeek and others -- French media - Excite News

2026-02-24
Excite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the accused companies' AI models) and their development and use. The unauthorized large-scale interaction and data extraction to train competing models directly violate intellectual property rights, a recognized harm under the AI Incident definition. Additionally, the manipulation of AI responses related to sensitive political topics suggests potential human rights violations. The harm is realized, not merely potential, as the unauthorized use and data extraction have already occurred. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anthropic holds firm on military AI use restrictions even after Pentagon meeting - Reuters, via Investing.com

2026-02-24
Investing.com 日本
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI technology) and its potential use in military applications, which could plausibly lead to significant harms such as violations of human rights or harm to communities if used for autonomous weapons or surveillance. Since no harm has yet occurred and the article discusses the potential and regulatory dispute around future use, this qualifies as an AI Hazard. It is not an AI Incident because no direct or indirect harm has materialized. It is not Complementary Information because the article does not provide updates or responses to a past incident but rather focuses on the ongoing dispute and potential risks. Therefore, the classification is AI Hazard.

Anthropic denounces Chinese AI firms DeepSeek, Moonshot, and MiniMax for "illicitly extracting Claude's capabilities"

2026-02-24
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' AI models) and their use and misuse. The unauthorized large-scale extraction of Claude's capabilities via fraudulent accounts and sessions directly breaches terms of service and access restrictions, constituting misuse of AI systems. This misuse leads to a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The event describes realized harm (not just potential), including unauthorized use and competitive disadvantage, thus qualifying as an AI Incident rather than a hazard or complementary information. The detailed description of the campaigns and their scale confirms the direct involvement of AI systems and the resulting harm.

xAI and the Department of Defense reportedly sign a contract to use Grok in classified systems

2026-02-24
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (xAI's Grok) being contracted for use in classified military systems by the U.S. Department of Defense. The AI's use in intelligence and weapons development is directly linked to potential harms such as violations of human rights and disruption of security. While no actual harm is reported, the plausible future harm from AI use in military and surveillance contexts is well recognized. The event does not describe a realized harm or incident but highlights a credible risk and strategic shift in AI deployment in defense. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

"Lift Claude's restrictions or sever ties": Defense Secretary Hegseth warns Anthropic

2026-02-25
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used in sensitive military contexts. The dispute centers on the potential removal of safety restrictions that currently prevent harmful uses like mass surveillance or autonomous weapons development. The DoD's threat to forcibly remove these restrictions or sever ties indicates a credible risk of future harm stemming from the AI system's use. However, no actual harm or incident has occurred yet; the article discusses a credible potential for harm if the AI system's use is expanded without safeguards. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic discloses detection of, and countermeasures against, large-scale "distillation attacks" by other AI labs

2026-02-24
CodeZine
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of AI systems (Claude and others) in a way that could plausibly lead to significant harms, including national security risks due to unregulated AI models created from stolen capabilities. Since no direct harm has yet occurred but the risk is credible and significant, this qualifies as an AI Hazard. The article also includes calls for coordinated mitigation, but the main focus is on the detection and risk of the attack, not on a response to a past incident. Therefore, it is not an AI Incident or Complementary Information, but an AI Hazard.

Anthropic publishes the latest version of its Responsible Scaling Policy (RSP) for AI

2026-02-25
CodeZine
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual or potential harm caused by AI systems, nor does it report on any malfunction or misuse leading to harm. Instead, it focuses on a governance and risk management framework designed to reduce AI risks and improve transparency. Therefore, it constitutes Complementary Information as it provides context and updates on societal and governance responses to AI risks without reporting a new AI Incident or AI Hazard.

Anthropic decides to roll back safety restrictions

2026-02-26
GIGAZINE
Why's our monitor labelling this an incident or hazard?
Anthropic's withdrawal of a safety pledge and relaxation of training restrictions on AI systems directly involves the development and use of AI. The involvement of the U.S. Department of Defense and the threat of contract termination highlight the high stakes and potential for increased risk. Although no actual harm or incident is reported, the removal of safety constraints plausibly increases the risk of AI-related harm in the future. This fits the definition of an AI Hazard, as it is a circumstance where AI system development and use could plausibly lead to harm, but no harm has yet materialized. The event is not Complementary Information because it is not merely an update or response to a past incident but a significant policy change with potential future risk. It is not an AI Incident because no harm has occurred yet.

Unidentified hackers abused Claude in a cyberattack on the Mexican government, stealing vast amounts of personal data, foreign media report

2026-02-26
ITmedia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Claude) manipulated by a hacker to conduct a cyberattack that led to the theft of sensitive government data, which constitutes harm to individuals' privacy and a violation of rights. The AI system's misuse directly contributed to the incident, fulfilling the criteria for an AI Incident. The report also mentions the use of another AI system (ChatGPT) to assist in the attack, reinforcing the AI involvement. The harm is realized, not just potential, as the data breach has occurred. Therefore, this event is classified as an AI Incident.

Unidentified hacker revealed to have used the AI "Claude" to steal 150 GB of Mexican government data, including 195 million taxpayer records

2026-02-26
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) in the malicious exploitation of government networks, resulting in the theft of sensitive personal and governmental data. The AI system was used to identify vulnerabilities, create attack scripts, and automate data theft, which directly caused harm to individuals and institutions through privacy violations and unauthorized data access. This fits the definition of an AI Incident as the AI system's use directly led to harm (violation of rights and harm to communities through data breach). Although some government bodies deny the breach, the cybersecurity firm's detailed report and the scale of data stolen support the classification as an AI Incident.

Anthropic CEO Dario Amodei rejects the Department of Defense's demands over AI national-security issues

2026-02-27
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) used by the DoD, with safety measures designed to prevent harmful uses such as large-scale domestic surveillance and fully autonomous weapons. The DoD's demand to remove these safety measures and the threat to designate Anthropic as a supply chain risk if they refuse creates a credible risk that the AI could be used in ways that violate human rights or cause harm. No actual harm has been reported yet, but the plausible future harm from forced removal of safety controls and misuse in military contexts fits the definition of an AI Hazard. The event is not merely general AI news or a response update, but a significant governance and ethical conflict with potential for harm.

President Trump bans Claude from use within the government -- denounces operator Anthropic as "far left"

2026-02-27
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its use within government and military contexts. The conflict arises from the intended use of the AI for large-scale domestic surveillance and autonomous weapons, which are known to pose serious risks of harm including privacy violations and potential human rights abuses. Although these harms are not reported as having occurred yet, the government's push to override ethical restrictions and the company's refusal highlight a credible risk that such harms could materialize if the AI is used as intended by the government. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident involving violations of rights and harm to communities. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on a significant conflict with potential for serious AI-related harm.

President Trump denounces Anthropic as a "far-left woke company" and immediately halts use of its products in government agencies

2026-02-27
ITmedia
Why's our monitor labelling this an incident or hazard?
The article discusses the political and ethical dispute over AI safety safeguards in military AI contracts, involving AI system use and governance. There is no report of actual harm or malfunction caused by AI systems, nor a specific incident of harm. The focus is on policy decisions, company stances, and public statements, which provide important context and updates on AI governance and potential risks but do not describe a realized AI Incident or an immediate AI Hazard. Hence, it fits the definition of Complementary Information, as it enhances understanding of AI ecosystem developments and governance responses without reporting direct or imminent harm.

Anthropic's chief, summoned to the Pentagon for refusing to put his AI system at the US military's disposal - HotNews.ro

2026-02-23
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The article describes a situation where an AI system's use is being contested due to ethical concerns about harmful applications (mass surveillance and lethal autonomous weapons). While the AI system is involved and the potential for harm exists, no harm has yet occurred. The event is about the plausible future risk of harm from certain uses of the AI system and the governance and ethical decisions surrounding it. Therefore, this qualifies as an AI Hazard, as it concerns plausible future harms related to the AI system's deployment and use, but no incident (realized harm) has taken place.

Anthropic publishes "Claude's Constitution" and opens a new debate on AI ethics

2026-02-21
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) and its training process, but it does not describe any realized harm or plausible imminent harm caused by the AI system. The focus is on ethical guidelines, philosophical debates, and governance strategies, which are responses and contextual developments in the AI ecosystem. There is no indication of direct or indirect harm, nor a credible risk of harm stemming from the AI system's use or malfunction described here. Hence, the event fits the definition of Complementary Information, as it enhances understanding of AI ethics and governance without reporting a new incident or hazard.

How do you teach a chatbot to be "good"? Anthropic's experiment with Claude's "Constitution"

2026-02-20
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The article does not describe any event where the AI system Claude caused or could plausibly cause harm. Instead, it details the ethical framework and alignment process Anthropic uses to train Claude to behave safely and beneficially. It also discusses governance challenges and industry context. Since it provides supporting context and updates on AI safety and governance without reporting harm or plausible harm, it fits the definition of Complementary Information.

Anthropic accuses three Chinese AI labs of massively extracting Claude's capabilities amid US-China disputes over chip exports

2026-02-23
ziarulnational.md
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the competing AI models) and describes the misuse of AI through distillation attacks that have already occurred, leading to intellectual property violations and potential national security harms. The misuse of AI capabilities to create unauthorized copies and the risk of these copies being used for harmful purposes (cyber operations, disinformation, surveillance) fits the definition of an AI Incident, as harm to intellectual property rights and potential harm to communities and national security are realized or ongoing. The involvement of AI systems is clear, and the harms are direct or indirect consequences of AI misuse, not merely potential or speculative risks.

Anthropic at the center of a crucial decision at the Pentagon

2026-02-23
ziarulevenimentul.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's AI models) and discusses its potential military use. The focus is on negotiations about ethical boundaries and safeguards to prevent misuse, such as autonomous weapons or surveillance, which are credible risks. However, no actual harm or incident has been reported; the event is about preventing or managing plausible future harms. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI and its potential impacts.

How do you teach a chatbot to be "good"? Anthropic's experiment with Claude's "Constitution" - Stiripesurse.md

2026-02-21
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and its ethical training framework, but it does not describe any realized harm or incident caused by the AI system. Nor does it describe a plausible future harm event. Instead, it details the development and governance approach to align the AI system with ethical principles, which is a form of complementary information about AI system development and governance. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anthropic's chief, summoned by Hegseth after refusing to put his AI system, Claude, at the Pentagon's disposal

2026-02-23
euronews.ro: Știri de ultimă oră, breaking news, #AllViews
Why's our monitor labelling this an incident or hazard?
The article describes a dispute over the use of an AI system (Claude) by the US Department of Defense, with the CEO refusing to allow its use for autonomous weapons or surveillance. While the AI system is involved and its use in military operations is mentioned, there is no indication that any harm has occurred or that the refusal itself caused harm. The event centers on governance, ethical stances, and potential contractual consequences, which fits the definition of Complementary Information as it provides context and updates on AI governance and responses rather than describing an AI Incident or Hazard.

Musk-Pentagon agreement on the use of Grok in classified systems

2026-02-24
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Grok, Claude, Gemini, ChatGPT) in military classified systems, which clearly qualifies as AI system involvement. However, there is no indication that any harm has occurred yet due to the use or malfunction of these AI systems. The article discusses potential risks and concerns about security and access to sensitive data, which could plausibly lead to harm in the future, but no actual incident or harm is reported. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm given the sensitive nature of military AI applications, but no direct or indirect harm has materialized at this point.

The Pentagon gives AI companies an ultimatum: demands "full access" without limits

2026-02-24
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used in sensitive military operations, indicating AI system involvement. The conflict centers on the use and potential modification of this AI system's capabilities, which could lead to harm if safety restrictions are removed. Although no actual harm is reported yet, the threat to remove safeguards and the Pentagon's pressure to gain unrestricted access create a credible risk of future harm, including injury, disruption, or rights violations. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident. It is not an AI Incident yet because no harm has materialized, nor is it merely Complementary Information or Unrelated, as the focus is on a credible risk stemming from AI system use in military contexts.

Musk's xAI signs agreement with the Pentagon amid dispute with Anthropic - Source: Investing.com

2026-02-24
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (the Grok model by xAI and Claude by Anthropic) in classified military applications, which clearly qualifies as AI system involvement. Although no direct harm or incident is reported, the deployment of AI in weapons development and battlefield operations plausibly could lead to harms such as injury, violation of human rights, or disruption of critical infrastructure. The article highlights a dispute over usage restrictions, indicating concerns about ethical and legal compliance, further underscoring potential risks. Since no actual harm has yet occurred but the potential for significant harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic stands firm on restrictions on military use of artificial intelligence after Pentagon meeting - Reuters - Source: Investing.com

2026-02-24
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and the restrictions on its development and use in military applications. However, no actual harm or incident has occurred yet. The event concerns a potential risk related to the use of AI in autonomous weapons and surveillance, which could plausibly lead to harms such as violations of human rights or harm to communities if the AI were used in these ways. Since the event is about the potential for harm and regulatory responses rather than a realized harm, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic: Chinese companies allegedly exploited its chatbot to improve their own models

2026-02-24
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use and misuse of an AI system (the Claude chatbot) by third parties to gain unauthorized access and improve competing AI models. This misuse constitutes a violation of intellectual property rights, which is a recognized form of harm under the AI Incident definition (c). Additionally, the potential for resulting AI models to lack safety controls introduces risks related to malicious use, further supporting the classification as an AI Incident. Since actual unauthorized use and harm (intellectual property theft) have occurred, this is not merely a potential hazard or complementary information but a realized AI Incident.

The US asks Anthropic to allow autonomous military use of its artificial intelligence | in.gr

2026-02-24
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Anthropic's AI system Claude, which the US military uses in operations involving violence and surveillance, both of which constitute harm to persons and violations of rights. Pressure from the US government to remove the safety restrictions that prevent autonomous military use points to use, and potential misuse, of the AI system that could lead to further harm. The reported use of Claude in a military operation involving bombings and capture attempts confirms realized harm. The resignation of a security official with a warning about global risk adds to the gravity of the situation. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to significant harms.

Anthropic in conflict with the Pentagon over the use of AI in military applications | Η ΚΑΘΗΜΕΡΙΝΗ

2026-02-24
H Kαθημερινή
Why's our monitor labelling this an incident or hazard?
The article involves an AI system developed by Anthropic and its potential use in autonomous weapons and surveillance, which are high-risk applications with plausible future harms including injury, human rights violations, and other significant harms. Although no incident (harm) has occurred yet, the conflict and the Pentagon's pressure indicate a credible risk that the AI technology could be used in ways leading to harm. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information, as the focus is on potential future harm and regulatory conflict rather than realized harm or responses to past incidents.

Anthropic accuses three Chinese "giants" of industrial espionage

2026-02-24
Liberal.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and competing AI models) and describes a large-scale misuse of AI capabilities through systematic unauthorized interactions to replicate the model's functionality. This constitutes a violation of intellectual property rights, a recognized harm under the framework. Additionally, the potential for these replicated models to be used by authoritarian regimes for harmful purposes such as cyberattacks or biological weapons elevates the severity of the incident. The direct link between AI system misuse and these harms justifies classification as an AI Incident rather than a hazard or complementary information.

Anthropic: "Πόλεμος" με την Κίνα για την κλοπή εμπορικών μυστικών του Claude

2026-02-24
PCMag Greece
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (Claude) leading to the unauthorized extraction and use of proprietary AI capabilities by Chinese developers, which directly constitutes a violation of intellectual property rights. The large-scale creation of fake accounts and extensive API usage to clone the AI model is a clear harm caused by AI system misuse. This meets the criteria for an AI Incident because the harm (violation of intellectual property rights and commercial damage) is realized and directly linked to the AI system's use. The article does not merely discuss potential future harm or general AI developments but reports an ongoing harmful event involving AI misuse.

Pentagon ultimatum to Anthropic over the use of AI in military applications

2026-02-25
NewsIT
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (large language models) and their potential military use, which is currently restricted by Anthropic but pressured by the Pentagon. No actual harm or incident has occurred yet, but the dispute highlights the plausible risk of AI being used in autonomous weapons or surveillance, which could lead to serious harms including violations of human rights or physical harm. The event is about the potential future use and regulatory conflict, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it directly concerns AI systems and their governance in a high-stakes context.

Why AI company Anthropic is clashing with the US over a military contract (Source: Euronews)

2026-02-25
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's AI chatbot Claude) and their development and intended use in military contexts. The conflict and threats from the U.S. government indicate a credible risk that unrestricted military use of AI could lead to harms such as autonomous weapons deployment and mass surveillance, which align with the definition of AI Hazard (plausible future harm). No actual harm or incident is reported as having occurred yet, so it is not an AI Incident. The article is not primarily about responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI and its societal implications, so it is not Unrelated.

Anthropic refuses to comply with the Pentagon's demands for AI weapons | in.gr

2026-02-25
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's AI models) and their potential use in autonomous weapons and military operations, which are known to pose significant risks of harm (injury, violations of rights, disruption). Although no actual harm or incident has been reported, the conflict centers on the plausible future use of AI in ways that could lead to serious harm. The refusal of Anthropic to comply with Pentagon demands and the Pentagon's threat to enforce compliance highlight the credible risk of AI-enabled autonomous weapons deployment. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident involving harm. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the conflict and potential risks of AI use in military autonomous weapons.

"Πόλεμος" Anthropic-Πενταγώνου για τη χρήση της AI σε στρατιωτικές εφαρμογές

2026-02-25
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI models) and its use in military applications, which is explicitly discussed. The conflict centers on the potential deployment of AI in dangerous military missions without human control, which could plausibly lead to harms such as injury, violations of human rights, or other serious consequences. No actual harm is reported yet, but the credible risk of harm from unrestricted military AI use is evident. The article does not describe a realized incident but highlights a significant potential hazard related to AI use in defense. Thus, the classification as an AI Hazard is appropriate.

What is the Pentagon's ultimatum to Anthropic over Claude AI?

2026-02-25
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used in military operations, with direct links to harm (military operation leading to capture of a political figure). The Pentagon's demand to remove safety features and the threat to invoke the Defense Production Act highlight the AI system's role in causing or enabling harm. The conflict over ethical constraints and the operational use of AI in lethal or surveillance contexts further supports classification as an AI Incident. The presence of realized harm and the AI system's pivotal role in the event meet the criteria for an AI Incident rather than a hazard or complementary information.

Why Anthropic and the US are clashing over a military AI contract

2026-02-25
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's AI chatbot Claude) and its development and use in a military context. The conflict centers on ethical concerns and government pressure to allow unrestricted military use, which could plausibly lead to harms such as autonomous weapons deployment or mass surveillance. However, no actual harm or incident has occurred yet. The event thus fits the definition of an AI Hazard, as it describes a credible risk scenario stemming from the AI system's use and development. The article does not report realized harm or an incident, nor is it primarily about a response to a past incident, so AI Hazard is the appropriate classification.

What is the Pentagon's ultimatum to Anthropic over Claude AI?

2026-02-25
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude AI) and its use in military contexts. The Pentagon's demand to remove safety restrictions to enable use in autonomous weapons and surveillance directly relates to potential violations of human rights and harm to communities. Since the company currently refuses and no harm has yet occurred, the event is a credible risk scenario rather than a realized incident. Hence, it fits the definition of an AI Hazard, reflecting plausible future harm from the AI system's use if the restrictions are removed.

Πόλεμος "εξουσίας" πίσω από την σύγκρουση για τον έλεγχο της Τεχνητής Νοημοσύνης που ακουμπάει ήδη την ζωή μας

2026-02-27
NewsIT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude model) and concerns its use and control, which is central to the conflict. The article does not report actual harm occurring yet but discusses credible risks of harm from misuse, including autonomous lethal weapons and mass surveillance, which are serious potential harms under the AI harms framework. The conflict and threats of forced technology seizure indicate a high-risk scenario where the AI system's use could plausibly lead to significant harms. Since no realized harm is described, the event is best classified as an AI Hazard rather than an AI Incident. The article also discusses broader governance and control issues but does not focus primarily on responses or updates, so it is not Complementary Information. It is clearly related to AI systems and their societal impact, so it is not Unrelated.

USA: Trump ordered government agencies to stop using Anthropic's AI - iefimerida.gr

2026-02-27
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and a governmental order to cease its use, citing risks to lives and national security if access is restricted. However, no actual harm or incident caused by the AI system is reported. The focus is on the potential risk or hazard posed by limited access to the AI system for military purposes. This fits the definition of an AI Hazard, where the AI system's development or use could plausibly lead to harm, but no harm has yet occurred. It is not complementary information because the main focus is not on responses or updates to a past incident, nor is it unrelated as it clearly involves AI and potential harm.

US rift with AI: Anthropic denies the Pentagon the use of AI for war and surveillance - iefimerida.gr

2026-02-27
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its potential use in mass surveillance and autonomous weapons, both of which could lead to serious harms such as violations of human rights and democratic principles. Although no direct harm has yet occurred, the dispute and contract negotiations reveal a credible risk that the AI system could be used in harmful ways. The company's stance to prevent such uses and the government's insistence on broad usage rights indicate a plausible future harm scenario. Since no actual harm has been reported, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the potential misuse and contractual conflict, not on updates or responses to past incidents.

Trump ordered his government to immediately stop using Anthropic's AI

2026-02-27
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and discusses its use by government agencies, specifically the military. However, there is no indication that the AI system has caused any direct or indirect harm, injury, or violation of rights. The event is about stopping the use of the AI system due to concerns about national security and access control, which implies a potential risk but no actual incident. Therefore, this is best classified as Complementary Information, as it provides context on governance and policy decisions related to AI use, without describing a new AI Incident or AI Hazard.

Immediate Trump order to US agencies to stop using Anthropic's AI

2026-02-27
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or plausible future harm caused by the AI system. It focuses on a political decision to cease use of a specific AI technology due to access restrictions, which is a governance or policy response. There is no indication of injury, rights violations, disruption, or other harms linked to the AI system's use or malfunction. Therefore, this is best classified as Complementary Information, as it provides context on governance and policy responses related to AI use.

Trump: Order for the government to immediately stop using Anthropic's Claude

2026-02-27
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm caused by the AI system, nor does it describe a specific malfunction or misuse event. Instead, it reports a political directive to stop using an AI system due to concerns about potential risks to national security and constitutional compliance. This fits the definition of Complementary Information, as it is a governance response to AI-related concerns and provides context on societal and political reactions to AI deployment, without describing a concrete AI Incident or AI Hazard.

Trump ordered his government to "immediately stop using" Anthropic's AI

2026-02-27
zougla.gr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and its use by government agencies, but there is no indication that the AI system has caused any injury, rights violations, disruption, or other harms. The event is about a political directive to cease use of the AI system, reflecting a governance or policy response rather than an incident or hazard. There is no credible or specific indication that the AI system's use or malfunction could plausibly lead to harm in the near future as described. Therefore, this is best classified as Complementary Information, as it provides context on governance and policy decisions related to AI use.

Why Anthropic and the US are clashing over a military contract (Source: Euronews)

2026-02-27
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its use and potential misuse in military contexts. The conflict centers on the refusal to allow unrestricted military use due to ethical and safety concerns, highlighting plausible future harms such as autonomous weapons deployment and mass surveillance. No actual harm or incident is reported yet, but the credible risk of significant harm is present. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the event.

"Ψεύτες με σύμπλεγμα Θεού": Οργή Πενταγώνου κατά Anthropic για τα όρια της στρατιωτικής ΑΙ | in.gr

2026-02-27
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude chatbot) and its potential military use. The dispute arises from the AI system's intended use and the ethical concerns about its deployment in lethal autonomous weapons and surveillance, which could plausibly lead to significant harms such as violations of human rights or harm to communities. However, no actual harm or incident has occurred yet; the article discusses threats, negotiations, and potential consequences. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the AI system is used in ways that cause harm. It is not Complementary Information because the main focus is not on updates or responses to a past incident but on an ongoing dispute with potential future harm. It is not Unrelated because the AI system and its military use are central to the event.

Trump ordered federal agencies to stop working with Anthropic | in.gr

2026-02-27
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) used by the U.S. Department of Defense, indicating AI system involvement. However, the event centers on a policy dispute and a government order to stop using the AI system, with no reported harm or malfunction caused by the AI. The disagreement concerns usage restrictions and control over the AI system, which is a governance and operational issue rather than an incident or hazard causing or plausibly leading to harm. Since no harm has occurred and the article mainly reports on the conflict and its implications, it fits the definition of Complementary Information, providing context and updates on AI governance and deployment challenges.

Trump orders his government to "immediately stop using" Anthropic's AI

2026-02-27
CNN.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Anthropic's AI technology) and a governmental directive to stop its use due to concerns about national security and potential risks to lives. However, there is no description of actual harm or incident caused by the AI system. The focus is on a policy decision and a warning about potential risks, not on an AI Incident or a specific AI Hazard event. This fits the definition of Complementary Information, as it details a governance response to AI-related concerns without reporting a new incident or hazard.

Anthropic: "Πόλεμος" για τον έλεγχο της Τεχνητής Νοημοσύνης

2026-02-27
newsbreak
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) and its development and use, specifically in military contexts. While there is a clear ethical and governance conflict and potential for harm if the AI is used for mass surveillance or autonomous weapons, no actual harm or incident has been reported. The event centers on the plausible future risk and ethical stance against certain uses of AI technology, making it an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential misuse and the company's refusal to comply, which implies a credible risk of harm if the AI were used as demanded.

Trump bans government use of Anthropic

2026-02-27
newsbreak
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI technology) and their use in defense, which is a sensitive and potentially high-risk domain. However, the event is about a political directive banning the use of this AI technology by government agencies due to ethical and policy disagreements, not about an AI incident causing harm or a hazard with plausible future harm. The focus is on governance and policy decisions, making it Complementary Information rather than an Incident or Hazard.

Unthinkable - The US wants an army of AI killers - Ultimatum to OpenAI, Google, Anthropic: Build them or we finish you

2026-02-26
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the Claude model) in military operations, which directly implicates AI in potential harm to human life and rights. The Pentagon's use of AI without the developer's consent and the removal of ethical constraints increase the risk of AI-driven harm. The article describes realized use of AI in lethal operations, together with threats to human rights and accountability, fulfilling the criteria for an AI Incident. The deployment of AI systems in autonomous or semi-autonomous weapons in conflict zones is a clear case of AI-related harm or the risk thereof. Hence, it is not merely a hazard or complementary information but an AI Incident.

The US Pentagon hires a "Killer Robot": Artificial intelligence now sets the terms of war

2026-02-27
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system developed by Anthropic in a military operation by the U.S. Pentagon, which is a direct use of AI in a context that can cause harm (warfare). The conflict over ethical constraints and the Pentagon's demand for AI without such limits indicates a risk of misuse or harmful deployment. The discussion of AI's role in autonomous weapons and lethal decisions aligns with potential violations of human rights and harm to communities. Although no specific incident of harm is detailed, the use of AI in military operations with lethal potential and the ethical controversy constitute a credible risk of harm. Given the direct involvement of AI in military operations and the serious ethical and safety concerns raised, this qualifies as an AI Incident due to the realized use of AI in warfare and the associated harms and risks.

"Κόκκινες γραμμές" Anthropic στο Πεντάγωνο: Όχι σε ανεξέλεγκτη χρήση AI

2026-02-27
Sigma Live
Why's our monitor labelling this an incident or hazard?
The article involves AI systems and their potential military applications, which could plausibly lead to significant harms such as violations of human rights (mass surveillance) or harm from autonomous weapons. However, no actual harm or incident has occurred or been reported. The focus is on the ethical and governance conflict and the company's refusal to allow unrestricted use, which is a governance and ethical issue rather than a realized incident or hazard. Therefore, this is best classified as Complementary Information, as it provides important context on societal and governance responses to AI use in military contexts without describing a specific AI Incident or AI Hazard.

USA: Trump orders his government to "immediately stop using" Anthropic's AI - Real.gr

2026-02-27
Real.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's AI) and discusses its use by the U.S. government, specifically the military. The decision to stop using the AI is based on concerns that its use could endanger lives and national security, indicating a plausible risk of harm. No actual harm or incident is reported, so it does not meet the criteria for an AI Incident. The event is not merely complementary information since the main focus is on the potential risk and the government's preventive response. Therefore, it fits the definition of an AI Hazard.

USA: Trump orders his government to "immediately stop using" Anthropic's AI

2026-02-27
parapolitika.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's AI technology) and its use by U.S. federal agencies, including the Department of Defense. The directive to stop using the AI is motivated by concerns that the AI's use could endanger American lives and national security, implying a credible risk of harm. However, there is no indication that harm has already occurred or that the AI malfunctioned or was misused. The event is a governmental response to a perceived threat, indicating a plausible future harm scenario. Thus, it qualifies as an AI Hazard under the framework, as it concerns the plausible risk of harm from the AI system's use, but no realized harm is reported.

Trump orders all federal agencies to phase out the use of Anthropic technology

2026-02-27
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's AI technology) and its use in federal and military contexts. The disagreement arises from concerns about the AI's potential use in lethal autonomous weapons and mass surveillance, which could lead to serious harms including violations of human rights and threats to national security. While no actual harm has been reported yet, the situation plausibly could lead to an AI Incident if the AI were used in ways that cause injury, rights violations, or other significant harms. The President's directive to phase out the technology reflects recognition of these risks. Hence, this is best classified as an AI Hazard, as the harm is potential and the event centers on the plausible future risks of AI misuse in sensitive applications.

Anthropic unyielding under Pentagon pressure over the use of AI | Protagon.gr

2026-02-27
Protagon.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude model) and discusses its potential use in mass surveillance and autonomous weapons, which are applications that could plausibly lead to harms such as violations of human rights and physical harm. No actual harm or incident is reported; rather, the company is resisting pressure to allow such uses. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm. The article does not describe a realized harm or incident, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on the potential harmful uses of AI and the ethical stance of the company.

Trump lashes out at Anthropic: "All use by federal agencies is discontinued"

2026-02-27
insider.gr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems developed by Anthropic and concerns their use by federal agencies, including the military. The decision to halt usage is based on concerns about the company's control over the AI tools and potential risks to national security and lives, implying plausible future harm. However, there is no report of actual injury, disruption, or rights violations caused by the AI systems so far. The announcement is a preventive measure addressing potential risks, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI system use and associated risks.

rizospastis.gr - xAI agreement with the US Department of War; Anthropic backs away from its "commitments" on protection against the risks

2026-02-28
ΡΙΖΟΣΠΑΣΤΗΣ
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems being developed and used for military purposes, including autonomous drones and surveillance, which could plausibly lead to harms such as violations of human rights, harm to communities, and escalation of conflict. Although no specific harm has yet occurred, the described developments and policy shifts increase the risk of AI-related incidents in the future. The discussion about the removal of safety constraints and the race for AI superiority in military contexts supports classification as an AI Hazard rather than an AI Incident. The article does not report a realized harm but warns of credible future risks, fitting the definition of an AI Hazard.

Anthropic-Pentagon rift over military use of AI: "We are not changing our stance"

2026-02-27
SofokleousIn.GR
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's AI models) and their potential military use, which is a recognized area of risk. The conflict and threats from the Pentagon highlight the possibility that these AI systems could be used in ways that might lead to harm, such as in autonomous weapons or other military applications. However, the article does not report any actual harm or incident caused by the AI systems so far. The focus is on the negotiation and ethical stance, indicating a credible risk of future harm but no realized harm. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Trump demands immediate end to government use of Anthropic's AI

2026-02-27
Business Daily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI technology) and its use by government agencies, but there is no indication that the AI system has caused any injury, rights violation, disruption, or other harm. The decision to stop using the AI system is a policy response to a disagreement over access, not a reaction to an incident or hazard caused by the AI system. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal responses related to AI use, without describing an AI Incident or AI Hazard.

Anthropic's ethical "block" on the Pentagon: No to uncontrolled military use of AI

2026-02-27
The PressRoom
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, as Anthropic develops AI technology. The event centers on the use and potential misuse of AI systems for military purposes, which could plausibly lead to harms such as violations of human rights or harm to communities if fully autonomous weapons or mass surveillance are deployed. However, no actual harm or incident has occurred yet; the company is resisting demands to allow such uses. Therefore, this situation constitutes an AI Hazard, reflecting a credible risk of future harm from the military use of AI systems if unrestricted access is granted.

Anthropic: Rift with the US Department of War over the use of artificial intelligence in autonomous weapons and mass surveillance of citizens - Αγώνας της Κρήτης

2026-02-27
Αγώνας της Κρήτης
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its development and use in sensitive areas like autonomous weapons and mass surveillance. The company refuses to allow its AI to be used in ways that could cause harm, such as autonomous lethal decisions or mass surveillance infringing on democratic values and rights. Although no direct harm has occurred yet, the potential for significant harm is clearly articulated, making this a credible AI Hazard. The article does not report any realized harm or incident, nor is it primarily about responses or updates to past incidents, so it is not Complementary Information. It is also not unrelated, as AI systems and their ethical use are central to the event.

Anthropic rejects the Pentagon's new terms on autonomous weapons

2026-02-27
SecNews.gr
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's Claude AI models) and their potential use in autonomous weapons and surveillance, which are areas with significant ethical and safety concerns. However, no direct or indirect harm has occurred, nor is there a described plausible imminent risk of harm resulting from AI system malfunction or misuse. The focus is on the company's principled refusal and the negotiation process, which informs the broader AI ecosystem and governance landscape. This aligns with the definition of Complementary Information, as it updates on societal and governance responses to AI-related ethical challenges without reporting an AI Incident or AI Hazard.

Trump lashes out at Anthropic: "All use by federal agencies is discontinued"

2026-02-27
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The article focuses on a political and administrative decision to discontinue the use of an AI system by federal agencies due to concerns about control and national security. There is no indication that the AI system caused any direct or indirect harm, nor that a plausible harm event occurred. The event is about a policy decision and a public statement by a political figure, which fits the definition of Complementary Information as it provides context and governance response to AI use rather than describing an AI Incident or AI Hazard.

Rift between Washington and Anthropic over military use of Claude

2026-02-27
Ηλεκτρονική Πύλη ikypros
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) and discusses its development and use in military contexts. The disagreement centers on the potential for harmful uses (e.g., autonomous weapons, mass surveillance) that Anthropic seeks to restrict. The U.S. government threatens to terminate contracts and label Anthropic a supply chain risk if it does not comply. However, there is no indication that any harm has yet occurred due to the AI system's deployment or malfunction. The focus is on the plausible future harm that could arise from unrestricted military use of the AI technology. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Pentagon versus Anthropic: The battle for the "brain" of modern war - iAxia

2026-02-27
iAxia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its use in military applications. The Pentagon's demand to remove ethical guardrails to enable fully autonomous lethal decisions directly relates to the AI system's use and potential malfunction or misuse. While no actual harm has occurred yet, the plausible future harm includes lethal autonomous weapons operating without human oversight, which could cause injury or death and violate human rights. The article also discusses strategic risks of dependency on foreign AI systems for national defense, emphasizing the potential for significant harm. Since the harm is not yet realized but is a credible and serious risk, the event is best classified as an AI Hazard rather than an AI Incident.

OpenAI and Google employees push back against military use of AI - Fibernews

2026-02-27
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (advanced AI models from OpenAI, Google, Anthropic) and their potential military use. The concerns raised by employees and analysts focus on the plausible future harms from AI-enabled autonomous weapons and mass surveillance, which align with definitions of AI Hazards. No actual harm or incident is reported yet, but the credible risk of harm is central to the event. Hence, it is classified as an AI Hazard rather than an Incident or Complementary Information.

Backlash at Google and OpenAI: Employees call for a "brake" on Pentagon use of AI without safeguards - iAxia

2026-02-27
iAxia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems developed by OpenAI, Google, and Anthropic, and discusses their potential use by the Pentagon in autonomous weapons and mass surveillance. While no actual harm has been reported yet, the threat of forced deployment and adaptation of AI models for lethal autonomous systems without human oversight presents a credible risk of serious harm, including violations of human rights and physical harm. The employees' protest and the discussion of government pressure highlight the plausible future harm scenario. Since harm is not yet realized but is plausible and credible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Anthropic: Warning that all government deals will be halted if it does not come to terms with the Pentagon - Fibernews

2026-02-27
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude chatbot) and its use by the military. The dispute centers on the refusal to allow unrestricted military use due to ethical concerns about lethal autonomous weapons and surveillance, which are recognized potential harms. No actual harm or incident has occurred yet, but the government's threat to compel use or cut off contracts highlights a credible risk of future harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to AI incidents involving harm to people or violation of rights if the AI is used in lethal or surveillance applications. There is no indication of realized harm or incident at this stage, so it is not an AI Incident. It is more than complementary information because it concerns a credible risk of harm, and it is not unrelated as it directly involves AI system use and potential harm.

Anthropic too: "Unauthorized AI model extraction by Chinese firms including DeepSeek detected" | 연합뉴스

2026-02-23
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and Chinese AI companies' models) and the use of AI techniques (distillation) to extract and replicate AI model capabilities without authorization. This unauthorized extraction constitutes a breach of intellectual property rights and terms of use, which falls under violations of applicable law protecting intellectual property rights. Additionally, the removal of safety features in the extracted models poses a risk to security, which is a form of harm. Since the unauthorized extraction and use have already occurred, this is a realized harm, not just a potential risk. Hence, this qualifies as an AI Incident.

앤스로픽 "中 AI 기업들, 클로드 결과물 무단 추출"

2026-02-23
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Claude AI model and distillation AI technique). The unauthorized extraction of AI outputs by competitors directly breaches intellectual property rights, which is a recognized harm under the framework. The harm has already materialized as the data collection and use have taken place. Therefore, this qualifies as an AI Incident due to violation of intellectual property rights caused by the use of AI systems.

Anthropic too: "Unauthorized AI model extraction by Chinese firms including DeepSeek detected" - 전파신문

2026-02-23
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically Anthropic's 'Claude' model and Chinese companies using AI techniques to extract its outputs illicitly. The use of distillation to replicate AI capabilities without authorization is a breach of intellectual property rights, a recognized harm under the framework. Furthermore, the removal of safety features in the extracted models poses a plausible risk to national security, which is a significant harm. Since the harm (intellectual property violation) has occurred and security risks are present, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Amid Trump's approval of AI chip exports to China... Anthropic: "Chinese firms including DeepSeek stole AI models" | 아주경제

2026-02-24
아주경제
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (Anthropic's Claude model) and the unauthorized use of their outputs by Chinese AI companies to create competing models, which is a direct violation of intellectual property rights. This constitutes harm under the AI Incident category (violation of intellectual property rights). The event also involves the use and development of AI systems and their outputs, with direct consequences. Although there is discussion about potential future risks related to AI chip exports, the primary focus is on realized harm through model theft and misuse. Hence, the classification as an AI Incident is appropriate.

Anthropic detects theft of 16 million data records: "A security threat from China" By Economic Review

2026-02-24
Investing.com 한국어
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (AI model outputs and distillation techniques) leading to the theft of proprietary AI technology, which is a clear violation of intellectual property rights (harm category c). The large-scale unauthorized data extraction and model theft have already taken place, constituting realized harm. Additionally, the potential for serious security threats further supports the classification as an AI Incident. The AI system's development and use are directly implicated in the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

美 "中, 가짜계정으로 앤트로픽 AI 대량 추출"...'블랙웰' 밀반입 의혹도

2026-02-24
쿠키뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (Anthropic's Claude model) and the unauthorized extraction of its outputs by Chinese companies using fake accounts and distillation to train their own AI models, which is a direct violation of intellectual property rights. The potential loss of safety controls in models trained on the stolen outputs also poses a risk of misuse in dangerous applications. Furthermore, the use of restricted Nvidia AI chips in China despite export bans is a national security concern, since AI hardware enables advanced AI capabilities. These factors combined demonstrate direct and indirect harms caused by AI system misuse and unauthorized use, fitting the definition of an AI Incident.

Was DeepSeek built by copying US AI answers?... Anthropic alleges "unauthorized extraction"

2026-02-24
서울경제
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (large language models like Claude, ChatGPT, and R1) and describes the unauthorized extraction and use of AI training data and outputs by Chinese companies. This misuse directly breaches intellectual property rights, which is a recognized harm under the AI Incident definition (violation of intellectual property rights). Additionally, the removal of safety features and the national security concerns further underline the seriousness of the harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the misuse of AI systems and their outputs.

Anthropic accuses Chinese companies of stealing everything they can in AI. Over 16 million suspicious interactions

2026-02-26
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots and AI models) and describes a misuse scenario where repeated querying is used to extract model behavior to train competing AI systems without authorization. This constitutes a breach of intellectual property rights, a recognized form of harm under the AI Incident definition. Additionally, Anthropic highlights potential cybersecurity risks and safety issues arising from unauthorized models lacking original safety mechanisms. These factors confirm direct or indirect harm linked to AI system misuse, fulfilling criteria for an AI Incident rather than a mere hazard or complementary information. The event is not unrelated as it centrally concerns AI system misuse and its consequences.
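The harvesting step this note describes is mechanically simple: script a pool of accounts, send prompts to the target chatbot, and store each prompt/response pair. Below is a minimal illustrative sketch of that loop; `query_target_model`, the key names, and the file layout are hypothetical stand-ins, not any vendor's actual API.

```python
import json
import time

def query_target_model(prompt: str, api_key: str) -> str:
    """Hypothetical stand-in for a call to the target chatbot's API.

    A real extraction campaign would call the vendor's HTTP endpoint or
    SDK here; this placeholder only echoes the prompt so the sketch
    runs offline.
    """
    return f"<model response to: {prompt}>"

def harvest(prompts: list[str], api_keys: list[str], out_path: str) -> None:
    """Collect prompt/response pairs while rotating across many accounts.

    Rotating keys is what an alleged fake-account scheme would buy an
    attacker: each account stays under per-account rate and volume
    limits while the aggregate corpus grows very large.
    """
    with open(out_path, "w", encoding="utf-8") as out:
        for i, prompt in enumerate(prompts):
            key = api_keys[i % len(api_keys)]  # spread traffic over accounts
            response = query_target_model(prompt, key)
            out.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
            time.sleep(0.1)  # throttle to mimic organic usage

if __name__ == "__main__":
    harvest(
        ["Explain TCP slow start.", "Summarise the causes of World War I."],
        ["key-account-001", "key-account-002"],
        "harvested_pairs.jsonl",
    )
```

At the scale alleged in these articles (tens of thousands of accounts, millions of interactions), the same loop simply runs with a larger prompt corpus and key pool, which is why defenders look for correlated query patterns across nominally unrelated accounts.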

American companies accuse Chinese firms of copying AI models. How "model extraction" and "distillation" attacks work

2026-02-26
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI models like Claude) and their unauthorized use through model extraction and distillation techniques. The harm is a violation of intellectual property rights (a breach of obligations under applicable law protecting intellectual property), which is a direct consequence of the AI system's use. The article details how the AI models were repeatedly queried to create unauthorized copies, constituting misuse of AI systems leading to harm. Although no physical injury or direct cybersecurity incident is reported, the intellectual property violation and potential security risks meet the criteria for an AI Incident. Hence, the classification as AI Incident is appropriate.
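To make the distillation step concrete: once the pairs are harvested, they are reformatted into a supervised fine-tuning dataset in which the teacher's response becomes the student's target completion. The sketch below assumes the JSONL layout from the previous example and a generic chat-style training format; it is illustrative, not a description of any specific pipeline.

```python
import json

def to_sft_dataset(harvested_path: str, out_path: str) -> int:
    """Convert harvested prompt/response pairs into supervised
    fine-tuning examples for a 'student' model.

    Treating the teacher's responses as target completions is the
    essence of black-box distillation: the student learns to imitate
    the teacher's behaviour without any access to its weights.
    """
    count = 0
    with open(harvested_path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            pair = json.loads(line)
            example = {
                "messages": [
                    {"role": "user", "content": pair["prompt"]},
                    {"role": "assistant", "content": pair["response"]},
                ]
            }
            dst.write(json.dumps(example) + "\n")
            count += 1
    return count

if __name__ == "__main__":
    n = to_sft_dataset("harvested_pairs.jsonl", "student_sft.jsonl")
    print(f"wrote {n} training examples")
```

One consequence the monitor notes flag follows directly from this recipe: the student inherits whatever behaviour appears in the harvested responses, but not the teacher's deployment-side safety mechanisms, which is why the resulting copies may lack the original safety controls.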

Anthropic accuses DeepSeek and other Chinese companies of using Claude to train their AI models

2026-02-24
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) and describes its use in a manner that allegedly violates intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights, thus constituting harm under the AI Incident definition. The large-scale unauthorized use and the potential deployment of less safe distilled models in critical domains further underline the seriousness of the incident. Although some harms are prospective (e.g., risks of unsafe models in military or surveillance), the violation of rights and large-scale unauthorized use have already occurred, making this an AI Incident rather than merely a hazard or complementary information.

Anthropic alleges that DeepSeek and other Chinese artificial intelligence firms illegally used its model, Claude, to train their own models - Aktual24

2026-02-24
Aktual24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the rival AI models) and describes the use of AI outputs to train other AI models without authorization, which constitutes a violation of intellectual property rights. This is a direct or indirect harm caused by the development and use of AI systems. Since the misuse has already occurred and is described as ongoing, it qualifies as an AI Incident under the category of violations of human rights or breach of obligations under applicable law, specifically intellectual property rights.

"Războiul Rece" al Inteligenței Artificiale? Companiile americane acuză firmele chineze că fură miliarde din cercetarea AI

2026-02-26
euronews.ro: Știri de ultimă oră, breaking news, #AllViews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbots and model training) and describes a misuse scenario where AI technology is copied without authorization through model extraction attacks. This constitutes a breach of intellectual property rights, which is a recognized harm under the AI Incident definition. However, the article does not report that this misuse has yet caused direct or indirect realized harm such as legal rulings, operational disruptions, or health/safety impacts. Instead, it focuses on the potential risks, including national security concerns and the circumvention of safety filters, which could plausibly lead to significant harms in the future. Given the absence of confirmed realized harm but clear credible risk, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it reports a new primary concern rather than updates or responses to past incidents. It is not Unrelated because AI systems and their misuse are central to the event.

American artificial intelligence giants accuse China of theft

2026-02-27
Descopera.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (advanced AI models and their replication via distillation). The event stems from the use and development of AI systems (model extraction). While there is a clear violation of intellectual property rights (a form of harm under the framework), the article does not report that this violation has directly or indirectly caused realized harm to persons, communities, or property. Instead, it focuses on accusations, detection efforts, and potential risks. The mention of possible increased cybersecurity risks and misuse is speculative and future-oriented, not describing an actual incident or imminent hazard. Thus, the event is best categorized as Complementary Information, providing important context on AI ecosystem challenges and governance issues without reporting a concrete AI Incident or AI Hazard.

Chinese companies accused of copying American artificial intelligence models

2026-02-24
meta.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (chatbot models) and describes their misuse through coordinated attacks to extract proprietary capabilities, which is a violation of intellectual property rights. This harm has already occurred as the attacks were realized and involved millions of interactions. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing a breach of intellectual property rights.

Anthropic accuses Chinese companies of theft - USB.mk

2026-02-26
USB.mk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Claude and the accused companies' AI models) and describes the misuse of AI through unauthorized data generation and model distillation to replicate proprietary AI capabilities. This misuse directly leads to a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. Therefore, this event qualifies as an AI Incident due to the realized harm of IP rights violation through AI misuse.

Anthropic accuses Chinese companies of theft - M Express

2026-02-26
M Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems: the accused companies are AI developers, and the fake accounts were allegedly created to extract outputs from Anthropic's Claude at scale. The accusation describes misuse of an AI system to harvest data without authorization, a potential breach of intellectual property and other legal rights. Since the event describes actual misuse leading to harm (fraudulent access that undermines trust and market fairness), it qualifies as an AI Incident.

Anthropic rejects the Pentagon's ultimatum: "We cannot give in with a clear conscience" - Локално

2026-02-27
Локално
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Anthropic's Claude) and discusses its potential use by the Pentagon for mass surveillance and fully autonomous weapons, both of which are associated with serious harms (human rights violations, harm to communities). Although no actual harm has occurred yet, the demand to remove protective measures and the threat to force compliance create a credible risk that the AI system could be used in harmful ways. Thus, this is an AI Hazard rather than an Incident, as the harm is plausible but not realized. The article focuses on the ethical and governance conflict rather than reporting an actual incident of harm, so it is not Complementary Information or Unrelated.

The clash between the Pentagon and the AI giants escalates: Trump reacts fiercely - Trn.mk

2026-02-28
Trn.mk
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (OpenAI and Anthropic's models) and their potential military use, which is a significant governance and ethical issue. However, no direct or indirect harm has occurred yet, nor is there a clear plausible immediate risk of harm described. The focus is on the companies' ethical stances, government pressure, and political reactions, which are societal and governance responses to AI development and deployment. This fits the definition of Complementary Information, as it enhances understanding of AI ecosystem dynamics without reporting a new AI Incident or AI Hazard.

Trump bans Anthropic: Agreement between the Pentagon and OpenAI - Trn.mk

2026-02-28
Trn.mk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI tools from Anthropic and OpenAI) and concerns their use in sensitive military and surveillance contexts. The ban and contract reflect concerns about potential misuse and risks, indicating plausible future harm if AI is used in autonomous weapons or mass surveillance without proper controls. However, no actual harm or incident has been reported yet. The article mainly discusses government decisions, company disputes, and policy implications, which aligns with Complementary Information as it provides context and updates on AI governance and societal responses rather than describing a specific AI Incident or Hazard.

Trump's AI about-face: OpenAI advances, Anthropic blocked

2026-02-28
Рацин.мк
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Anthropic's Claude and OpenAI's models) and their use or potential use in military contexts, including autonomous weapons and surveillance, which are high-risk applications. The Pentagon's labeling of Anthropic as a supply chain risk and the dispute over safety restrictions indicate concerns about plausible future harms from these AI systems. However, the article does not report any realized harm or incident caused by these AI systems but rather a policy and operational conflict about their deployment and safety controls. Thus, it fits the definition of an AI Hazard, where the AI systems' development and use could plausibly lead to significant harms, but no direct or indirect harm has yet materialized.

Trump orders a break with Anthropic - tech giants set ethical limits on the military use of AI - Pari.com.mk

2026-02-28
Pari.com.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used or intended for military purposes, with ethical restrictions and government demands in conflict. However, it does not report any realized harm, injury, rights violation, or disruption caused by AI systems. Instead, it focuses on the ongoing dispute, ethical principles, and governance challenges, which are updates and context about AI ecosystem responses and policy debates. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

[Editorial] Put the brakes on the excessive military use of AI

2026-03-17
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems developed by Anthropic and their use in military contexts, including autonomous weapons and surveillance, applications with potential for significant harm. Although no specific incident of harm is reported, the article highlights the plausible risk of harm from the military use of AI, such as ethical violations and a lowered barrier to lethal attacks. This event therefore describes a credible AI Hazard related to the potential misuse and expansion of AI in military applications, rather than a realized AI Incident or mere complementary information.

US administration argues in court documents that Anthropic's blacklisting is "justified"

2026-03-18
Newsweek Japan
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and concerns about its potential misuse in autonomous weapons and surveillance, which could plausibly lead to significant harms such as violations of human rights or national security risks. The government's blacklisting is a preventive measure based on these plausible risks. Since no actual harm has been reported, but there is a credible risk of harm due to the AI system's potential applications, this event qualifies as an AI Hazard. It is not Complementary Information because the main focus is on the legal dispute and risk assessment, not on updates or responses to a past incident. It is not an AI Incident because no harm has materialized yet.

Anthropic's lawyers claim the government is "pressuring" customers to switch to competitors

2026-03-14
businessinsider.jp
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its development and use context, but the reported harm is economic and reputational damage to the company due to government pressure on customers, not harm caused by the AI system's malfunction, misuse, or outputs. There is no indication of injury, rights violations, infrastructure disruption, or environmental harm caused by the AI system. The government's designation and pressure are governance and legal issues impacting the AI ecosystem. This fits the definition of Complementary Information, as it informs about societal and governance responses to AI-related developments without describing a new AI Incident or AI Hazard.

"World Economy Summary Notes + Deep Dive," Part 75: Is the "Tech Right" even more deeply entrenched than MAGA?

2026-03-17
愛媛新聞社
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Anthropic's Claude) and their potential use in fully autonomous weapons and mass surveillance, which could plausibly lead to serious harms such as injury or violations of rights. Since no actual harm has occurred yet but there is a credible risk of future harm, this qualifies as an AI Hazard. The focus is on the potential misuse and governance conflicts rather than realized incidents, so it is not an AI Incident. It is more than general AI news or policy discussion, so it is not Complementary Information or Unrelated.

The real reason the Trump administration excluded Anthropic... OpenAI's rapid military embrace and "AI ethics"

2026-03-18
Business Journal
Why's our monitor labelling this an incident or hazard?
The article involves AI systems and their use in military and government contexts, but it primarily reports on government policy actions, corporate positioning, and ethical conflicts rather than any realized harm or a specific event where AI caused or could plausibly cause harm. There is no description of an AI system malfunction, misuse causing harm, or a credible near-miss event. The focus is on the strategic and ethical implications of AI deployment and government procurement decisions, which fits the definition of Complementary Information as it provides context and updates on AI governance and industry dynamics without reporting a new AI Incident or AI Hazard.

Chinese AI firms suspected of "free-riding distillation" of rival AI models, US company alleges, citing security risks as well

2026-03-18
日経クロステック(xTECH)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI models) and their unauthorized use by Chinese companies to extract and replicate model capabilities, constituting a violation of intellectual property rights (a breach of obligations under applicable law). The misuse is large-scale and systematic, involving circumvention of access controls, and directly produces a harm recognized by the AI Incident definition. The potential security risks further elevate the seriousness of the incident. Since the harm is realized rather than merely potential, this is classified as an AI Incident rather than a hazard or complementary information.