AI-Driven Targeting in Iran Leads to Civilian Harm and Raises Global Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The United States and Israel used advanced AI systems, including Project Maven, to rapidly identify and attack over a thousand targets in Iran, resulting in civilian casualties and the death of Iran's supreme leader. Reports highlight that AI-driven targeting accelerated the pace of attacks and that algorithmic errors contributed to wrongful strikes on civilian sites.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI systems in military targeting and attack execution, which directly led to harm, including civilian deaths and destruction. The involvement of AI in accelerating decision-making and target selection is clear, and the reported errors in AI algorithms plausibly caused wrongful attacks on civilian sites. These outcomes constitute injury and harm to people and potential violations of human rights, fulfilling the criteria for an AI Incident. The harm is realised, not merely potential, and the AI systems' role is pivotal in the chain of events leading to it.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Government, security, and defence

Affected stakeholders
General public
Government

Harm types
Physical (death)
Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


War in Iran: Use of AI accelerated attacks against the Islamic Republic; warnings of risks from algorithmic errors

2026-03-05
El Universal

Use of AI in the offensive against Iran raises alarm over possible failures in target selection

2026-03-06
Gestión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in offensive military operations, including target selection and attack execution, which have resulted in civilian deaths and destruction. The harm to civilians and the potential misidentification of targets due to AI errors constitute injury and harm to groups of people and to communities. This qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm. The presence of AI in the decision-making chain and the resulting casualties meet the criteria for an AI Incident rather than a hazard or complementary information.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting operations that have directly resulted in civilian deaths and destruction, fulfilling the criteria for an AI Incident. The AI systems' use in selecting targets and accelerating attacks has led to real harm (civilian casualties), and errors in AI decision-making plausibly contributed to wrongful targeting. This constitutes injury and harm to people and communities, as well as potential violations of human rights. The AI systems' development and use are central to the event, and the harm is realised, not merely potential. Hence, the classification as an AI Incident is appropriate.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for target identification and attack execution in military operations. The article describes actual harm resulting from these AI-driven decisions, including civilian deaths and the misidentification of non-military sites as targets. This meets the criteria for an AI Incident because the AI's use has directly or indirectly led to injury and harm to people. The involvement of AI in accelerating and automating lethal decisions with insufficient human oversight further supports this classification.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-06
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Project Maven) used operationally to identify military targets, which has directly influenced attacks resulting in harm or risk of harm to civilians. The article reports actual use and consequences, including a mistaken target that could cause civilian harm. This meets the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to persons and communities. The presence of human review does not negate the AI's pivotal role in accelerating and enabling these attacks. Therefore, the event is classified as an AI Incident.