AI Targeting Error Leads to Civilian Deaths in Iran

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

On February 28, 2026, a US military AI system, reportedly Claude, caused a fatal targeting error during a missile strike in Minab, Iran, hitting a girls' school and killing 165–180 civilians. The incident highlights the risks of AI use in warfare and the consequences of outdated data and maps.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (Anthropic's AI models) and concerns their use in sensitive military and security domains. The conflict arises from the company's refusal to permit unrestricted military use of models that, if deployed without ethical constraints, could plausibly contribute to harms related to autonomous weapons or mass surveillance. Although no direct harm or incident has occurred, the dispute and exclusion reflect a credible risk scenario about AI deployment in critical infrastructure and defense, fitting the definition of an AI Hazard. The article does not report any realized injury, rights violation, or disruption caused by the AI systems, so it is not an AI Incident. Nor is it merely complementary information or unrelated, as the core focus is on the potential risks and governance challenges of AI use in military contexts.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Children; General public

Harm types
Physical (death)

Severity
AI hazard

AI system task
Reasoning with knowledge structures/planning; Goal-driven organisation

Articles about this incident or hazard

Is Anthropic really more ethical than other AI companies? It's complicated

2026-03-18
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The article centers on Anthropic's internal policy changes and its stance against certain military uses of its AI model Claude, including the removal of a safety veto and the resulting political and commercial consequences. These are governance and ethical issues about AI development and deployment but do not describe an AI Incident or AI Hazard as defined. There is no report of realized harm or a credible imminent risk of harm caused by the AI system itself. The content is best classified as Complementary Information because it provides important context and insight into AI governance, ethical challenges, and industry responses, enhancing understanding of the AI ecosystem without reporting a new incident or hazard.

When AI refuses war: Anthropic challenges the Pentagon in court

2026-03-18
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (Anthropic's AI models) and concerns their use in sensitive military and security domains. The conflict arises from the company's refusal to permit unrestricted military use of models that, if deployed without ethical constraints, could plausibly contribute to harms related to autonomous weapons or mass surveillance. Although no direct harm or incident has occurred, the dispute and exclusion reflect a credible risk scenario about AI deployment in critical infrastructure and defense, fitting the definition of an AI Hazard. The article does not report any realized injury, rights violation, or disruption caused by the AI systems, so it is not an AI Incident. Nor is it merely complementary information or unrelated, as the core focus is on the potential risks and governance challenges of AI use in military contexts.

Will Anthropic block access to Claude for military use?

2026-03-18
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude) and concerns about its use and control in military applications. The government's claim that Anthropic might disable or modify Claude during military operations indicates a potential risk of harm to national security, which fits the definition of an AI Hazard. There is no indication that any harm has yet occurred, only that the AI system's use could plausibly lead to harm. The article also covers legal and governance responses to this risk, but the primary focus is on the potential threat posed by the AI system's use or control, not on an actual incident or complementary information about past events.

The AI red lines we should not cross (by D. Belli and P. Nemitz)

2026-03-18
TPI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) in military targeting that led to a fatal error causing civilian deaths, which is a direct harm to people. This meets the definition of an AI Incident as the AI system's use directly contributed to injury and loss of life. The article also discusses the broader implications and risks of AI in military and surveillance contexts, but the primary focus is on the realized harm from the targeting error. Therefore, the event is classified as an AI Incident.

AI companies hire weapons experts to prevent catastrophic uses

2026-03-18
euronews
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit, as the companies are developing and deploying advanced AI models (e.g., Claude by Anthropic) that could be misused for harmful purposes such as chemical weapons or autonomous weapons. The article centers on the potential risks and the companies' efforts to mitigate these risks by hiring experts and setting strict usage policies. Since no actual harm has been reported but the potential for catastrophic misuse exists, this event fits the definition of an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the plausible future harm and risk management related to AI misuse in weapons contexts.

The Pentagon develops new AI after halting Anthropic over ethical clauses

2026-03-19
Blasting News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Claude, large language models) and their use in military contexts. However, it does not report any direct or indirect harm caused by these AI systems, nor does it describe a plausible future harm event. Instead, it discusses the breakdown of a contract due to ethical clauses, the Pentagon's response, legal disputes, and industry reactions. These are governance and strategic developments that provide context and updates on AI ecosystem dynamics. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.