US Military Uses Anthropic's Claude AI in Venezuela Attack; Chinese Firms Illegally Exploit Claude Model

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US military reportedly used Anthropic's Claude AI in a 2026 attack in Venezuela, raising concerns over AI's role in warfare and compliance with ethical guidelines. Separately, Anthropic revealed that Chinese AI firms illicitly used Claude through thousands of fake accounts to improve their own models, violating its intellectual property rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (Anthropic's AI) and their potential military use, which is a known area of significant risk. The disagreement and planned discussion indicate concerns about the ethical and safe use of AI in military contexts. Since no actual harm or incident has occurred or been reported, but the situation clearly involves plausible future harm related to AI military applications, this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.[AI generated]
AI principles
Accountability
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public
Business

Harm types
Physical (injury)
Economic/Property

Severity
AI hazard

Business function
Research and development

AI system task
Reasoning with knowledge structures/planning
Content generation


Articles about this incident or hazard

Chinese Firms Illicitly Used US AI "Claude," Anthropic Announces, Possibly to Improve Their Own Products

2026-02-24
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the misuse of an AI system (Anthropic's Claude) by unauthorized actors to train their own AI models, breaching intellectual property rights and the legal frameworks protecting AI systems. The misuse is organized and extensive, indicating a direct link between the AI system's use and the harm caused. Although no physical harm or disruption is reported, the violation of rights and potential legal breaches qualify this as an AI Incident under the OECD framework.

US Defense Secretary Hegseth Summons Anthropic CEO amid Clash over AI Weapons Use

2026-02-23
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Anthropic's AI) and their potential military use, which is a known area of significant risk. The disagreement and planned discussion indicate concerns about the ethical and safe use of AI in military contexts. Since no actual harm or incident has occurred or been reported, but the situation clearly involves plausible future harm related to AI military applications, this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.

IBM Shares Plunge 13% as Anthropic AI Accelerates Replacement of COBOL

2026-02-23
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI for code generation) and its impact on the market perception of IBM's business. However, it does not describe any harm or plausible harm caused by the AI system's development, use, or malfunction. The stock price drop is a financial market reaction, not a harm caused by AI. The event provides context on AI's disruptive potential in legacy system modernization, which is relevant background information but not an incident or hazard. Hence, it fits the definition of Complementary Information.

Talks on Conditions for Military Use of AI: Defense Secretary Hegseth and US Startup CEO at Odds over Easing Restrictions

2026-02-23
産経ニュース
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and its use in military contexts, specifically concerning autonomous weapons and surveillance, which are areas with high potential for significant harm. However, the article describes negotiations and policy discussions about usage restrictions and safeguards rather than any realized harm or incident. There is no indication that the AI system has caused harm yet, but the potential for harm through autonomous weapons or mass surveillance is clearly plausible. Therefore, this event constitutes an AI Hazard, as it concerns plausible future harms related to the AI system's military use and the possible relaxation of safeguards.

US Defense Secretary to Summon Anthropic CEO, Reportedly for Talks on Military Use of AI

2026-02-23
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its potential military use, which is a significant context for AI hazards. However, the event is about discussions and possible disagreements over usage terms, with no actual harm or incident reported. Since no direct or indirect harm has occurred yet, and the focus is on potential future use and policy negotiation, this qualifies as Complementary Information providing context on governance and societal responses to AI military applications rather than an AI Incident or AI Hazard.

Chinese AI Firms Illicitly Used "Claude" to Improve Their Models, Anthropic Warns

2026-02-23
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
An AI system (Anthropic's Claude) is explicitly involved, and its unauthorized use and modification by other AI companies have led to models without proper safety controls. This situation directly leads to significant potential harms, including risks to national security and public safety, which fits the definition of an AI Incident due to realized misuse and resulting harm. The article reports on actual misuse and the resulting risks, not just potential future harm or complementary information.

US to Hold Talks on Military Use of AI amid Clash over Easing Restrictions

2026-02-23
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude) and its potential military applications, including autonomous weapons and mass surveillance, which could plausibly lead to significant harms such as violations of human rights and harm to communities. However, the article only describes planned discussions and pressure to relax restrictions, with no actual deployment or misuse causing harm reported. Therefore, this situation represents a credible risk of future harm but not an incident of realized harm. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Chinese Firms Illicitly Used US AI "Claude," Possibly to Improve Their Own Products

2026-02-23
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Claude) and its outputs in a manner that violates the terms of use, constituting unauthorized or illicit use. This misuse is directly linked to the development and improvement of other AI products by the Chinese companies, which is a breach of intellectual property rights and possibly contractual or legal obligations. Since the misuse has already occurred and involves violation of rights through unauthorized use of AI outputs, it meets the criteria for an AI Incident under violations of intellectual property rights or breach of obligations under applicable law.

Chinese Firms Illicitly Used US AI

2026-02-23
中国新聞デジタル
Why's our monitor labelling this an incident or hazard?
An AI system ('Claude') is explicitly mentioned and was used by Chinese companies in an unauthorized manner. The misuse involves extensive interactions via fake accounts to extract data for training other models, which constitutes a breach of usage rights and possibly intellectual property rights. This misuse has already occurred and involves harm in the form of violation of legal or contractual obligations related to the AI system's use. Therefore, this event qualifies as an AI Incident due to the realized violation of rights through the AI system's misuse.

US to Hold Talks on Military Use of AI

2026-02-23
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (Claude) developed by Anthropic and its military use, indicating AI system involvement. However, the event is about upcoming negotiations regarding the conditions of military use, with no actual harm or incident reported. Therefore, it represents a plausible future risk scenario related to AI military applications but does not describe any realized harm or incident. This fits the definition of an AI Hazard, as the military use of generative AI could plausibly lead to harms in the future, but no direct or indirect harm has yet occurred.

US to Hold Talks on Military Use of AI amid Clash over Easing Restrictions

2026-02-23
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Claude') developed by Anthropic and its potential military use, including autonomous weapons and mass surveillance, which are areas with high potential for harm to human rights and safety. Since the article describes ongoing negotiations about easing restrictions that currently prevent such uses, and no actual harm has occurred yet, this situation constitutes an AI Hazard due to the plausible future risk of harm from these AI applications.

Chinese Firms Illicitly Used US AI "Claude," Possibly to Improve Their Own Products

2026-02-23
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the unauthorized and organized misuse of an AI system (Claude) by Chinese companies to enhance their own AI models, which directly breaches legal and intellectual property rights. This misuse of the AI system has already occurred and caused harm to the original AI developer's rights. Hence, it qualifies as an AI Incident under the category of violations of human rights or breach of obligations under applicable law, specifically intellectual property rights.

Chinese Firms Illicitly Used US AI "Claude," Possibly to Improve Their Own Products

2026-02-23
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') and its outputs by other companies in an unauthorized manner, which is misuse of AI technology. While this misuse is significant and involves large-scale interaction, the article does not indicate that any harm has actually occurred yet. The misuse could plausibly lead to harm such as intellectual property rights violations or unfair market practices, but since no harm is reported, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it reports a new misuse event, not an update or response to a prior incident. It is not Unrelated because it clearly involves AI misuse with potential consequences.

US to Hold Talks on Military Use of AI amid Clash over Easing Restrictions

2026-02-23
四国新聞社
Why's our monitor labelling this an incident or hazard?
The article describes the development and potential military use of an AI system with capabilities that could plausibly lead to significant harms, such as violations of human rights through mass surveillance and autonomous weapons deployment. Since these harms have not yet materialized but are clearly plausible given the context, this situation qualifies as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely providing complementary information or unrelated news, as it highlights credible risks associated with the AI system's use.

Impact of US Tariff Hikes Uncertain and Could Be Prolonged, Says Central Bank Policy Committee Member Taylor

2026-02-23
JP
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Anthropic's conversational AI "Claude") and its potential military use, which could plausibly lead to significant harms such as violations of human rights or escalation of conflict. Since no actual harm or incident is reported yet, but there is a credible risk associated with the AI's military application, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a credible warning about potential future harm related to AI use.

Chinese AI Firms Illicitly Used "Claude" to Improve Their Models, Anthropic Warns

2026-02-23
JP
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and the derived models) and their misuse (unauthorized use and model distillation without safety measures). Although no direct harm has yet occurred, the article highlights significant plausible future harms such as national security risks and uncontrolled spread of unsafe AI models. Therefore, this constitutes an AI Hazard because the misuse could plausibly lead to serious harms, but no actual harm is reported as having occurred yet.

AI Startup Anthropic Accuses Chinese Firms of Model "Distillation"

2026-02-23
The Wall Street Journal - Japan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's Claude model and the Chinese companies' AI systems). The alleged unauthorized use of Anthropic's AI model for training other AI systems constitutes a violation of intellectual property rights, which is a breach of obligations under applicable law protecting intellectual property rights. Since this misuse has already occurred and involves direct use of AI systems leading to a rights violation, it qualifies as an AI Incident under category (c).

Senior Pentagon Official: If Anthropic Does Not Comply, It Will Be "Compelled by Law"

2026-02-24
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the military domain and the potential enforcement of compliance by law, which relates to governance and societal response to AI. There is no indication of realized harm or an incident caused by AI, nor a direct or indirect link to injury, rights violations, or other harms. The article primarily discusses a policy position and potential future enforcement, which fits the category of Complementary Information as it provides context and updates on AI governance without reporting a new incident or hazard.

Anthropic CEO Faces Choice between Principles and Compromise as AI Contract Termination Looms over Meeting with US Defense Secretary

2026-02-24
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system developed by Anthropic, a leading AI company, and discusses its potential military use and related government contract risks. While no direct harm is reported, the situation reflects a credible risk of harm due to the military application of advanced AI and the political and ethical tensions around it. This constitutes an AI Hazard because the development and potential use of AI in military contexts could plausibly lead to harms such as violations of human rights or other significant harms. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information as it focuses on the risk and conflict around AI military use and contract termination.

Chinese Firms Suspected of "Distillation," Copying an AI Model via Masses of Fake Accounts; US Startup Says Its Service Was Illicitly Used

2026-02-24
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (the conversational AI 'Claude') by multiple entities through fake accounts to replicate the AI model. This unauthorized use directly leads to a violation of intellectual property rights, which is a recognized harm under the AI Incident definition (c). Therefore, this qualifies as an AI Incident due to the direct involvement of AI system misuse causing a breach of legal and intellectual property rights.

US Defense Secretary Hegseth Demands Anthropic Remove AI Safeguards, Pressing to Expand Military Use

2026-02-24
産経ニュース
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) used by the US Department of Defense, with explicit mention of safety measures (safeguards) currently in place. The demand to remove these safeguards and pressure to expand military use, including potential autonomous weapons development, indicates a credible risk of future harm. Since no harm has yet occurred but the situation could plausibly lead to significant harm, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response to a past incident, so it is not Complementary Information.

Defense Secretary Hegseth Demands Removal of Restrictions on Military Use of AI in Meeting with US Startup CEO

2026-02-25
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Claude) and discusses the removal of restrictions on its military use, which could plausibly lead to significant harms associated with AI military applications. No actual harm or incident is reported yet, but the government's pressure to lift restrictions highlights a credible risk of future AI-related harm. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information.

US Government Reportedly to Summon AI Startup CEO for Talks on Military Use

2026-02-24
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Claude) and its military use, which is a sensitive and potentially high-risk application. However, no actual harm, malfunction, or incident has been reported. The event is about the potential implications and governance of AI military use, which could plausibly lead to harm but has not yet done so. Therefore, it fits the definition of an AI Hazard, as it concerns circumstances where the AI system's use could plausibly lead to an AI Incident in the future, especially given the military context and the ongoing dispute over usage restrictions.

US Defense Secretary Demands Anthropic Withdraw Restrictions on Military Use of AI, Report Says

2026-02-24
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its potential military use, which is a context with credible risk of harm. Since no actual harm or incident has occurred yet, but the lifting of restrictions could plausibly lead to AI-related harms in the future, this qualifies as an AI Hazard. The event is about the potential for harm due to AI military use, not a realized incident or a complementary information update.

Anthropic Does Not Intend to Ease Restrictions on Military Use of AI, Sources Say

2026-02-24
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm caused by AI systems, nor does it describe an event where AI use or malfunction has led or could plausibly lead to harm. Instead, it details a policy and ethical stance by Anthropic to restrict military applications of its AI, and the government's efforts to influence or regulate this stance. This fits the definition of Complementary Information, as it provides context on governance and societal responses to AI military use issues without describing a specific AI Incident or AI Hazard.

Pentagon Demands AI Firm Anthropic Lift Restrictions on Military Use, Threatens Exclusion If It Refuses

2026-02-25
CNN.co.jp
Why's our monitor labelling this an incident or hazard?
The AI system Claude is central to the event, with the Department of Defense demanding removal of safety restrictions to allow military use. Anthropic's refusal is based on concerns about stability and ethical issues related to weapon control and mass surveillance. No actual harm has been reported yet, but the potential for harm is credible and significant, including misuse in weapons systems and surveillance violating rights. The threat of forced use under the Defense Production Act and blacklisting underscores the seriousness of the potential hazard. Since harm is not yet realized but plausible, this is classified as an AI Hazard rather than an Incident.

Chinese Rivals "Ripping Off" US AI Companies? Anthropic and OpenAI Make the Claim

2026-02-25
CNN.co.jp
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's Claude and OpenAI's ChatGPT) and their unauthorized use by Chinese AI companies through distillation, which is a form of AI model training. The misuse is alleged to violate intellectual property rights and raises national security concerns, indicating potential harm to communities and property. However, the article does not report any actual harm occurring yet, only the plausible risk of harm if these models are used maliciously. Thus, it fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to significant harm in the future. The event is not an AI Incident because no direct or indirect harm has materialized, nor is it complementary information or unrelated news.

Demand to Remove AI Safeguards: US Defense Secretary Presses for Military Use

2026-02-24
神戸新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) used by the military and concerns the removal of safety safeguards to enable expanded military use, including autonomous weapons development. Although no direct harm has yet occurred, the coercive pressure to remove safeguards plausibly leads to significant future harm, such as violations of human rights or escalation of autonomous weapon use. Therefore, this is an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a credible risk of harm due to AI system use in military contexts.

Demand to Remove AI Safeguards: US Defense Secretary Presses for Military Use

2026-02-24
東京新聞 TOKYO Web
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used by the US military and discusses the demand to remove its safety safeguards to enable expanded military use, including potential autonomous weapons development and mass surveillance. These uses are associated with significant potential harms (human rights violations, harm to communities). Since the harms are not yet realized but are plausible and credible given the context, the event fits the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The event is more than just complementary information because it reports a concrete government demand and threat of forced customization, indicating a credible risk of future harm.

Demand to Remove AI Safeguards

2026-02-24
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' is explicitly mentioned as a deployed generative AI used by the U.S. Department of Defense. The demand to remove safety safeguards directly relates to the AI system's use and development, potentially increasing risks of harm due to reduced safety controls. Although no actual harm is reported yet, the removal of safeguards plausibly increases the risk of AI-related incidents, especially given the military context. Therefore, this event constitutes an AI Hazard because it plausibly could lead to harm through expanded military use without safety measures.

Demand to Remove AI Safeguards

2026-02-24
茨城新聞社
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used by the U.S. military and discusses the removal of safety safeguards to expand military applications, including autonomous weapons development. Although no harm has yet occurred, the removal of safeguards and forced customization for military use plausibly could lead to serious harms such as violations of human rights and physical harm from autonomous weapons. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not an AI Incident because no harm has yet materialized, nor is it Complementary Information or Unrelated.

The Shock of Military AI in the Venezuela Attack: Exclusive Analysis of the Key Company as Anthropic Stirs Controversy

2026-02-24
日経クロステック(xTECH)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Anthropic's Claude) in a military operation that has already occurred, implying direct involvement of AI in a real-world conflict scenario. The use of AI in targeting or operational support in an attack constitutes a direct or indirect contribution to harm (potential injury or harm to persons, disruption of peace, and violation of ethical norms). Therefore, this qualifies as an AI Incident. The article does not merely discuss potential or future risks but reports on an actual event where AI was used in a military attack, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

US Firm Anthropic Relaxes AI Safety Standards; Will Not Halt Development Even If Rivals' Models Are More Capable

2026-02-25
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically Anthropic's AI models. The company's decision to relax safety standards increases the risk that AI systems developed may cause harm, such as misuse in autonomous weapons or other unsafe applications. Although no actual harm has yet occurred, the described circumstances plausibly could lead to AI incidents involving harm to people or violations of rights. Therefore, this constitutes an AI Hazard rather than an AI Incident, as the harm is potential and linked to the AI system's development and use.

Pressure to Expand Military Use of AI: US Secretary Hegseth Demands Removal of Safeguards, Hints at Contract Termination

2026-02-25
産経ニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Claude) used by the U.S. military, with the Department of Defense demanding removal of safety measures and threatening forced use under the Defense Production Act. This indicates the AI system's development and use are central to the event. Although no direct harm has been reported, the removal of safeguards and forced military use plausibly could lead to harms such as violations of human rights or escalation of autonomous weapons deployment. Since the harm is potential and not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the pressure to remove safeguards and the threat of forced use, which directly relates to plausible future harm.

Pentagon Asks Defense Contractors to Assess Their Dependence on Anthropic

2026-02-26
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI technology) and its potential military use, which could plausibly lead to harms such as violations of human rights or disruption of critical infrastructure. The Department of Defense's request for evaluation and the designation of supply chain risk indicate concern about future risks rather than a realized incident. Since no direct or indirect harm has occurred yet, but there is a credible risk of harm, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Pentagon Asks Defense Contractors to Assess Their Dependence on Anthropic

2026-02-26
JP
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI) and its use in defense-related contexts, which could plausibly lead to significant harms such as violations of human rights or harm related to autonomous weapons. The Department of Defense's request for dependency evaluation indicates concern about potential risks but does not describe any realized harm or incident. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harm stemming from the AI system's use or potential misuse, but no direct or indirect harm has yet occurred.

Anthropic Nears Break with Pentagon amid Clash over Military Use of AI

2026-02-27
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's AI) and its intended use in military contexts, which is a high-risk domain. The refusal to allow unlimited military use and the Department of Defense's retaliatory threats indicate a conflict over the AI's use. However, there is no indication that any harm has occurred yet, only a potential for harm given the military context. Therefore, this event is best classified as an AI Hazard, as it plausibly could lead to harm if the AI is used militarily without restrictions, but no incident has yet materialized.

Anthropic Rejects Pentagon Demand over Military Use of AI

2026-02-26
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its potential military use, which could plausibly lead to significant harms such as injury or violation of rights if used for autonomous weapons or surveillance. However, since the company has refused the demand and no actual use or harm has occurred yet, this constitutes a credible risk or hazard rather than an incident. Therefore, this event is best classified as an AI Hazard.

Anthropic CEO Rejects Pentagon Demand to Remove AI Restrictions, Citing Concern over "Diversion to Autonomous Weapons"

2026-02-27
ITmedia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in military applications and discusses the refusal to remove safety measures due to the risk of AI being used for fully autonomous weapons and large-scale surveillance, which could plausibly lead to significant harms including threats to human life and violations of rights. Since no actual harm has been reported but the risk is credible and significant, this qualifies as an AI Hazard. The event is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated.

US Firm Anthropic Rejects Pentagon's "Threatening" Demand over Weapons Use of AI

2026-02-27
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude) and its development and use policies. The U.S. Department of Defense's demand to relax safety measures for military use implies a potential for AI misuse in autonomous weapons or surveillance, which could plausibly lead to harms such as violations of human rights or harm to communities. Since no actual harm or incident has occurred yet, and the focus is on the potential for harm and ethical governance, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential for harm and the company's refusal to comply, which is a direct AI-related risk scenario.

AI Safety Standards Relaxed: No Halt to Development or Release, Says US Startup Anthropic

2026-02-26
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Anthropic's Claude model) and discusses changes in safety protocols related to its development. However, there is no indication that any harm has occurred or that the change has directly or indirectly led to injury, rights violations, or other harms. The removal of safety halt provisions could plausibly increase future risks, but the article does not report any realized harm or incident. Therefore, this event is best classified as an AI Hazard, reflecting a credible potential for future harm due to relaxed safety standards in AI development.

Anthropic Rejects US Defense Secretary's Demand over Restrictions on Military Use of AI

2026-02-27
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Anthropic's Claude) developed and used under contract by the U.S. Department of Defense. The CEO's refusal to lift military use restrictions is motivated by concerns about potential harms from AI-enabled autonomous weapons and mass surveillance. Although the AI is already in use within military systems, the article does not report any realized harm or incident resulting from this use. Instead, it highlights the risk and ethical concerns about possible future harms from military applications of AI. This fits the definition of an AI Hazard, as the event plausibly could lead to harms (e.g., violations of human rights, autonomous lethal actions) but no direct or indirect harm has yet occurred or been reported. The event is not Complementary Information because it is not an update or response to a past incident but a current refusal related to potential harm. It is not an AI Incident because no harm has materialized.

US Government Issues "War Cooperation" Ultimatum to AI Firm Anthropic: A Crossroads on Autonomous Lethal Weapons

2026-02-26
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's AI model Claude) and concerns its use in autonomous lethal weapons, which are AI-enabled systems with high potential for misuse and harm. The article does not report any actual harm yet but highlights a government ultimatum forcing the company to accept military use, including autonomous weapons. This situation plausibly leads to an AI Incident in the future due to the known risks of autonomous lethal weapons. Since harm is not yet realized but the risk is credible and imminent, the event is best classified as an AI Hazard.

Anthropic CEO Rejects Pentagon Demand over Military Use of AI

2026-02-27
ニューズウィーク日本版 オフィシャルサイト
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems developed by Anthropic and their potential military use, which could plausibly lead to harms such as violations of human rights or harm to communities if used in autonomous weapons or surveillance. The refusal to remove safety measures is a response to prevent such harms. Since no actual harm has been reported yet but the potential for significant harm exists, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a clear case of a credible risk of future AI-related harm due to military applications.

US Firm Refuses to Expand Military Use of AI

2026-02-27
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the generative AI "Claude") and its potential military use, which is a significant AI-related governance issue. However, there is no indication that the AI system's development, use, or malfunction has directly or indirectly caused harm or that harm is imminent. The event is about a company's policy decision and government pressure, which is a societal and governance response to AI military use concerns. Therefore, it fits the category of Complementary Information rather than an Incident or Hazard.

US Firm Refuses to Expand Military Use of AI in Response to Pentagon Demand to Remove Safeguards

2026-02-27
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Claude) and its development and use in military contexts. The refusal to remove safety safeguards to prevent military expansion of AI use indicates concern about potential misuse. Although no direct harm has occurred yet, the potential for AI to be used in autonomous weapons or mass surveillance constitutes a credible risk of harm. Thus, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's military application.

US Firm Refuses to Expand Military Use of AI in Response to Pentagon Demand to Remove Safeguards

2026-02-27
秋田魁新報電子版
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Claude) developed by Anthropic, currently used in military contexts. The DoD's demand to remove safety measures to expand military use raises credible risks of future harms such as autonomous weapons deployment or mass surveillance, which are serious human rights and community harms. Since no actual harm has yet occurred or been reported, but the potential for harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The company's refusal to remove safeguards is a mitigating factor but does not eliminate the plausible risk posed by the military use of AI without safety measures.

US Firm Refuses to Expand Military Use of AI

2026-02-27
IWATE NIPPO 岩手日報
Why's our monitor labelling this an incident or hazard?
The AI system 'Claude' is explicitly mentioned and is used in military applications, indicating AI system involvement. The refusal to remove safety safeguards relates to the use and potential misuse of the AI system in military contexts, which could lead to significant harms such as violations of human rights or harm from autonomous weapons. However, no actual harm has been reported yet; the event concerns a dispute over safety measures and potential future military use. Therefore, this constitutes an AI Hazard, as the development and potential use of the AI system in military applications could plausibly lead to harms, but no incident has occurred yet.

US Firm Refuses to Expand Military Use of AI in Response to Pentagon Demand to Remove Safeguards

2026-02-27
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and its potential military use, which could plausibly lead to significant harms such as violations of human rights or harm from autonomous weapons. However, no actual harm or incident has occurred yet; the company is resisting the removal of safeguards to prevent such harms. Therefore, this is a credible risk scenario where the AI system's use could plausibly lead to an AI Incident if safeguards are removed and military use expands. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Google and OpenAI Employees Call for "Limits on Military AI," in Solidarity with Anthropic

2026-02-27
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article focuses on a public letter signed by employees advocating for restrictions on military AI use, which is a governance and societal response. There is no description of an AI system causing harm or a specific event where AI use has led or could plausibly lead to harm. The event is about raising concerns and pushing for policy changes, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Anthropic AI Clashes with the Pentagon over Safety Measures

2026-02-27
The Cryptonomist
Why's our monitor labelling this an incident or hazard?
The article centers on the development and deployment of advanced AI systems by Anthropic for military use, with a dispute over safety restrictions that could prevent misuse in autonomous weapons or mass surveillance. Although no actual harm has been reported, the conflict involves credible risks of future harm due to potential misuse or overreach of AI capabilities in defense operations. The presence of AI systems is explicit, and the disagreement over safety constraints directly relates to the plausible future harm these systems could cause. Since no realized harm is described, this does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard, as the event plausibly could lead to significant harms if unresolved.