US Uses Anthropic AI in Lethal Military Strikes on Iran

During Operation Epic Fury, the US military used Anthropic's AI services, including Claude tools, alongside B-2 bombers and drones in strikes against Iranian military infrastructure. The AI's specific role is unclear, but its deployment contributed to lethal operations causing significant harm in Iran.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that Anthropic's AI tools were used by the US military in strikes that caused deaths and destruction in Iran. The harm is direct and significant, involving loss of life and damage to property and communities. The AI system's involvement in the military operation that led to these harms qualifies this as an AI Incident. The lack of detail on how the AI was used does not negate the fact that its use was part of an operation causing harm. Therefore, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Physical (death)

Severity
AI incident

Business function
Other


Articles about this incident or hazard

US used Anthropic AI in strikes against Iran: Report

2026-03-02
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Anthropic's AI tools were used by the US military in strikes that caused deaths and destruction in Iran. The harm is direct and significant, involving loss of life and damage to property and communities. The AI system's involvement in the military operation that led to these harms qualifies this as an AI Incident. The lack of detail on how the AI was used does not negate the fact that its use was part of an operation causing harm. Therefore, this event meets the criteria for an AI Incident.

Operation Epic Fury: US uses AI, stealth bombers and suicide drones against Iran | Check full list

2026-03-02
India TV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of suicide drones and low-cost attack drones modeled after Iranian designs, which are likely AI-enabled for autonomous or semi-autonomous operation. The use of these AI systems in military strikes directly causes harm to property and potentially to people, fulfilling the criteria for an AI Incident. The event involves the use of AI systems in a harmful context (a military attack) with direct consequences, not merely a potential or hypothetical risk. Therefore, it is classified as an AI Incident.

US Deploys B-2 Bombers, Suicide Drones, Anthropic AI in Iran Strikes

2026-03-02
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Anthropic's AI services in a military strike that caused deaths, including that of a high-profile target. Although the exact function of the AI is not detailed, its deployment in lethal operations implies that its outputs or support contributed to the harm. This meets the criteria for an AI Incident, as the AI system's use directly or indirectly led to harm to persons and communities. The involvement of AI in lethal autonomous or semi-autonomous weapon systems, or in decision support for targeting, is a recognized source of AI-related harm.

US Used B-2 Bombers, Suicide Drones, Anthropic AI in Strikes Against Iran: Report

2026-03-02
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Anthropic's Claude tools) in a military operation that resulted in lethal strikes against Iran, causing death and harm. The AI system's involvement in the development or execution of these strikes directly led to harm to persons and communities, fulfilling the criteria for an AI Incident. The article explicitly mentions AI use in the attack, and the harm is realized, not just potential.

Artificial intelligence, stealth jets and suicide drones: Inside the US assault on Iran

2026-03-02
The Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools being used in a military operation that caused harm (deaths and destruction). However, the AI's role is not detailed or shown to be a direct or indirect cause of harm; the harm stems from the military strikes themselves. The AI involvement is reported but not linked to malfunction, misuse, or a causal chain leading to harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides additional context about AI's integration in military operations, fitting the definition of Complementary Information.

US used B-2 bombers, suicide drones, Anthropic AI in strikes against Iran: Report

2026-03-02
KalingaTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI services from Anthropic in a military strike operation that caused lethal harm and destruction. The AI system's involvement in the use of lethal force and strategic targeting in warfare directly links it to harm to persons and communities, fulfilling the criteria for an AI Incident. Although the exact role of the AI tools is not detailed, their use in the operation that led to deaths and destruction is sufficient to classify this as an AI Incident rather than a hazard or complementary information.

From B-2 Bombers To Anthropic AI: What US Unleashed In Deadly 'Operation Epic Fury' Against Iran

2026-03-02
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI services in a military operation that caused harm to Iranian military infrastructure and leadership, which falls under harm to persons and property. Although the precise function of the AI tools is not detailed, their deployment in a lethal strike indicates AI's involvement in causing harm. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect role of AI in harm resulting from the operation.

The Killing of Khamenei? The Mastermind Was Not Human

2026-03-02
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Claude') in planning and executing a military strike that resulted in the death of a person, which is a direct harm to human life. The AI system was used to analyze complex data and simulate attack scenarios, playing a pivotal role in the operation's success. This meets the definition of an AI Incident as the AI system's use directly led to injury or harm to a person. The involvement is in the use phase of the AI system, and the harm is realized and significant. Hence, the event is classified as an AI Incident.

US Military Used "Claude" in Iran Despite Its Suspension by the Trump Administration

2026-03-01
Al-Araby Al-Jadeed
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named (Claude) used by the U.S. military in intelligence and operational contexts that led to real harm (deaths in a military operation). The AI's use was against the provider's terms and official government directives, indicating misuse or failure to comply with legal and ethical frameworks. The AI system's outputs influenced targeting and operational decisions, directly contributing to harm to people, fulfilling the criteria for an AI Incident. The article also mentions ongoing use despite bans, reinforcing the direct link to harm rather than a potential future risk or mere complementary information.

US Military Uses an AI Tool Against Iran

2026-03-01
Alrai-media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that led to an airstrike in Iran, implying direct harm to persons and infrastructure. The AI system is used for intelligence and targeting, critical functions that influence physical harm and conflict outcomes. This fits the definition of an AI Incident, as the AI system's use directly led to harm (injury or harm to persons and disruption of critical infrastructure). The article also notes the ongoing use and replacement of AI tools in military contexts, but the primary focus is on realized harm from AI-assisted military action, not merely potential harm or complementary information.

To Strike Iran, the US Military Used an AI Tool Banned by Trump

2026-03-01
Al-Watan Newspaper
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations that have directly led to harm (airstrikes in Iran). The AI system was used in target identification and battle simulations, which are critical to the execution of military attacks. This constitutes direct involvement of AI in causing harm to people and property, fulfilling the criteria for an AI Incident. The article does not merely discuss potential or future harm but describes actual use in operations with real consequences. Therefore, the classification is AI Incident.

Akhbarak Net | To Strike Iran, the US Military Used an AI Tool Banned by Trump

2026-03-01
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude') in military operations that have caused direct harm, such as airstrikes and capture missions. The AI system is used for intelligence and targeting, which are critical to the execution of these operations. The involvement of the AI system in decisions leading to physical harm to persons and potential disruption of critical infrastructure meets the criteria for an AI Incident. The article describes realized harm resulting from the AI system's use, not just potential harm, and thus it is not merely a hazard or complementary information. Therefore, the classification as an AI Incident is justified.

Despite the Ban Decision, Anthropic's Tools Took Part in a US Military Operation Against Iran

2026-03-01
Step News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Claude' by Anthropic) in military operations that have led to harm, including an airstrike against Iran and the capture of Venezuela's president. The AI system was used for intelligence analysis and operational planning, directly influencing military actions. This constitutes an AI Incident because the AI system's use has directly led to harm (military conflict and associated consequences). The article also highlights the security and political controversies around the AI tool's use, but the primary focus is on realized harm through military operations, not just potential or policy issues.

The Pentagon Used Anthropic in the Attacks on Iran Despite the Ban Imposed by Trump

2026-03-01
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The AI system Claude was actively used in military operations for intelligence and target selection, which directly implicates it in decisions that can cause injury or death and violate human rights. The article explicitly states the AI's role in these operations and the associated risks, fulfilling the criteria for an AI Incident. Although the company Anthropic imposed ethical limits, the Pentagon's use of the AI in lethal contexts and the ongoing debate about autonomous weapons highlight realized or imminent harm. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

"The Pentagon Used Anthropic for the Attack on Iran Despite the Ban"

2026-03-01
ANSA.it
Why's our monitor labelling this an incident or hazard?
The AI system Claude was used in the development and execution of a military attack, specifically for intelligence and target identification, which directly relates to harm (potential injury or death) to persons and disruption of critical infrastructure. The use of AI in this context, especially against explicit prohibitions, constitutes an AI Incident because the AI system's use directly contributed to actions causing harm. The article describes realized use leading to harm, not just potential risk, so it is an AI Incident rather than a hazard or complementary information.

The Pentagon Used Anthropic to Attack Iran. The Startup Had Been Banned by the Pentagon After Hegseth's Offer "on Ethical Limits"

2026-03-01
Open
Why's our monitor labelling this an incident or hazard?
The AI system Claude was used by the Pentagon for military intelligence and targeting in an attack on Iran, which constitutes direct involvement of AI in an event causing harm to persons and potentially violating legal and ethical standards. The use occurred despite an official ban citing ethical risks, highlighting misuse or disregard of governance frameworks. This meets the criteria for an AI Incident as the AI system's use directly led to harm and breaches of obligations.

How Khamenei Was Killed: The CIA's Intervention, Israel's Bombs, and the Use of Anthropic

2026-03-01
Today
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Pentagon used Anthropic's AI system for intelligence and operational support in a military strike that killed Khamenei and others. The AI system's outputs were pivotal in identifying targets and timing the attack, which directly led to harm (deaths). This meets the definition of an AI Incident because the AI system's use directly led to injury or harm to persons. The involvement is not speculative or potential but actual and consequential. Hence, the event is classified as an AI Incident.

The Pentagon Used Anthropic's AI to Support the Attacks on Iran (Despite Trump's Ban)

2026-03-01
MRW.it
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Pentagon used an AI system (Anthropic's Claude) for intelligence analysis and target selection in military operations that resulted in the death of an individual. This is a clear case where the AI system's use directly led to harm to persons, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but actual and consequential. The ethical concerns and government ban further underscore the significance of the harm caused. Hence, the event is classified as an AI Incident.