
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
On February 28, 2026, a US military AI system, reportedly based on Claude, made a fatal targeting error during a missile strike on Minab, Iran, hitting a girls' school and killing an estimated 165–180 civilians. The incident highlights the risks of deploying AI in warfare and the consequences of relying on outdated data and maps.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Anthropic's AI models) and concerns their use in sensitive military and security domains. The conflict stems from the company's refusal to permit unrestricted military use of its models, and deployment of such systems without ethical constraints could plausibly lead to harms involving autonomous weapons or mass surveillance. Although no direct harm has occurred, the dispute and exclusion reflect a credible risk scenario for AI deployment in critical infrastructure and defense, fitting the definition of an AI Hazard. Because the article reports no realized injury, rights violation, or disruption caused by the AI systems, it is not an AI Incident. Nor is it merely complementary or unrelated information: its core focus is the potential risks and governance challenges of AI use in military contexts.[AI generated]