AI-Driven Targeting in Iran Leads to Civilian Harm and Raises Global Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The United States and Israel used advanced AI systems, including Project Maven, to rapidly identify and attack over a thousand targets in Iran, resulting in civilian casualties and the death of Iran's supreme leader. Reports highlight that AI-driven targeting accelerated the pace of attacks and that algorithmic errors contributed to wrongful strikes on civilian sites.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI systems in military targeting and attack execution, which directly led to harm including civilian deaths and destruction. The involvement of AI in accelerating decision-making and target selection is clear, and the reported errors in AI algorithms plausibly caused wrongful attacks on civilian sites. These outcomes constitute injury and harm to people and potential violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' role is pivotal in the chain of events leading to these harms.[AI generated]
AI principles
Safety
Robustness & digital security

Industries
Government, security, and defence

Affected stakeholders
General public
Government

Harm types
Physical (death)
Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard

War in Iran: Use of AI accelerated attacks against the Islamic Republic; warnings over risks from algorithmic errors

2026-03-05
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems in military targeting and attack execution, which directly led to harm including civilian deaths and destruction. The involvement of AI in accelerating decision-making and target selection is clear, and the reported errors in AI algorithms plausibly caused wrongful attacks on civilian sites. These outcomes constitute injury and harm to people and potential violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' role is pivotal in the chain of events leading to these harms.
Use of AI in offensive against Iran raises alarm over possible failures in target selection

2026-03-06
Gestión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in offensive military operations, including target selection and attack execution, which have resulted in civilian deaths and destruction. The harm to civilians and potential misidentification of targets due to AI errors constitute injury and harm to groups of people and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm. The presence of AI in the decision-making chain and the resulting casualties meet the criteria for an AI Incident rather than a hazard or complementary information.
Use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting operations that have directly resulted in civilian deaths and destruction, fulfilling the criteria for an AI Incident. The AI systems' use in selecting targets and accelerating attacks has led to real harm (civilian casualties), and errors in AI decision-making plausibly contributed to wrongful targeting. This constitutes injury and harm to people and communities, as well as potential violations of human rights. The AI system's development and use are central to the event, and the harm is realized, not just potential. Hence, the classification as AI Incident is appropriate.
Use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for target identification and attack execution in military operations. The article describes actual harm resulting from these AI-driven decisions, including civilian deaths and misidentification of non-military sites as targets. This meets the criteria for an AI Incident because the AI's use has directly and indirectly led to injury and harm to people. The involvement of AI in accelerating and automating lethal decisions with insufficient human oversight further supports this classification.
Use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-06
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Project Maven) used operationally to identify military targets, which has directly influenced attacks resulting in harm or risk of harm to civilians. The article reports actual use and consequences, including a mistaken target that could cause civilian harm. This meets the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to persons and communities. The presence of human review does not negate the AI's pivotal role in accelerating and enabling these attacks. Therefore, the event is classified as an AI Incident.
Use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-06
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in the selection and prioritization of military targets, with documented tragic outcomes including civilian deaths. The AI's role in accelerating decision-making and potential errors causing harm to civilians fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people. The presence of AI in the operational chain and the resulting harm to human life and communities confirms this classification.
Why is the use of AI in the US and Israeli attacks on Iran a cause for concern?

2026-03-06
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in the development and use phases for military targeting and attack execution. The involvement of AI has directly contributed to harm, including civilian deaths and potential wrongful targeting due to algorithmic errors or insufficient human supervision. These harms fall under injury to persons and harm to communities, which are recognized AI Incident categories. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Use of AI in attacks on Iran: A risk of civilian tragedies?

2026-03-07
7dias.com.do
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military targeting and attack operations, which have resulted in civilian deaths, fulfilling the criteria for an AI Incident. The harm is direct and significant (loss of civilian lives), and the AI's role in accelerating and possibly causing errors in targeting is central to the event. This meets the definition of an AI Incident as the AI system's use has directly led to harm to people and communities.
AI at war: 5 things to know about Project Maven

2026-04-06
Dawn
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for targeting and battlefield management that has been operational and involved in US strikes against Iran. The AI system's outputs directly influence lethal military decisions, leading to harm to persons and communities, fulfilling the criteria for an AI Incident. The article discusses the development, use, and ethical controversies surrounding the AI system, confirming its pivotal role in causing harm. Hence, it is not merely a hazard or complementary information but an incident involving realized harm linked to AI.
Operation Epic Fury uses AI battlefield management to hit hundreds of targets in hours

2026-04-05
GEO TV
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor and satellite data to identify and prioritize targets for strikes. The system's involvement in the US military strikes against Iran, which resulted in the deaths and injuries of civilians including children, shows direct harm caused by the AI system's use. This meets the definition of an AI Incident as the AI system's use has directly led to injury and harm to groups of people. The article provides concrete examples of harm (civilian casualties) linked to the AI-assisted targeting, confirming the classification as an AI Incident rather than a hazard or complementary information.
Detection To Destruction: Pentagon's 'Project Maven' Is AI-Assistant In War

2026-04-05
NDTV
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military targeting and battlefield management. Its use has directly contributed to military strikes causing loss of life, including civilian casualties, which constitutes harm to people. The AI system's role in accelerating the kill chain and selecting targets is pivotal to these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's deployment in warfare.
How The Pentagon Is Using AI To Reshape Modern Warfare: Project Maven Explained

2026-04-06
News18
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes surveillance data and supports targeting decisions in warfare. The article reports actual harm resulting from strikes where the system likely played a central role, including civilian casualties. This constitutes direct harm to people caused by the use of an AI system, meeting the definition of an AI Incident. The ethical debates and operational details further confirm the AI system's pivotal role in causing significant harm.
AI at war: Five things to know about Project Maven

2026-04-05
Yahoo News
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor data and intelligence to identify targets and assist in strike decisions. Its use in recent US military strikes against Iran, which have caused civilian casualties including children, demonstrates direct harm resulting from the AI system's deployment. The article details realized harm (civilian deaths) linked to the AI-assisted targeting system, meeting the definition of an AI Incident. The involvement of AI in the development and use phases, and the direct link to harm, justify classification as an AI Incident rather than a hazard or complementary information.
AI at war | What to know about Project Maven

2026-04-06
The Hindu
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for military targeting and battlefield management. Its use has directly contributed to US strikes against Iran, which have caused civilian casualties, including children. This constitutes injury or harm to groups of people (harm category a). The AI system's role is pivotal in accelerating the kill chain and enabling these strikes. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and realized harm to human life in a conflict setting.
Project Maven: How the US military is using AI to find and hit targets

2026-04-05
ArabianBusiness.com
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system actively used by the US military for targeting and strike planning, integrating AI-driven sensor fusion and natural language models. The system's deployment in military strikes inherently involves harm to persons, fulfilling the definition of an AI Incident. Although the article does not specify particular incidents of malfunction or misuse, the operational use of AI in lethal targeting meets the criteria for an AI Incident because the AI system's use directly leads to injury or harm to people. The article also mentions supply chain risks and vendor changes, but these do not negate the fact that the AI system is currently used in operations causing harm. Therefore, the event is best classified as an AI Incident.
AI at war: Five things to know about Project Maven

2026-04-05
Mountain Democrat
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor data and intelligence to assist in targeting decisions. The article links its use to US military strikes in Iran, including one that reportedly killed 168 children, indicating direct harm to people and communities. The AI system's involvement in accelerating targeting and firing processes makes it a contributing factor to these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use in lethal military operations.
AI Central in US-Iran 2026 War as Targeting Systems Speed Strikes but Spark Accuracy and Ethics Concerns

2026-04-05
International Business Times AU
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in targeting and strike decision-making that have led to significant casualties, including civilian deaths, which constitutes harm to people and communities. The AI system's role is pivotal in enabling rapid strikes and influencing lethal decisions, even though humans retain final authority. The reported 60% accuracy and the tragic strike on a school demonstrate malfunction or limitations of the AI system contributing to harm. The event meets the criteria for an AI Incident as the AI system's use has directly led to injury and harm, and violations of human rights are implied through civilian casualties and ethical concerns raised.
Artificial Intelligence and war: 5 things to know about Maven Smart System

2026-04-05
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The Maven Smart System is an AI system that processes intelligence data to recommend military targets, directly influencing strike decisions. The article reports that since February, over 11,000 strikes in the war with Iran were reportedly identified by Maven, indicating the AI's outputs have been used operationally. The potential for civilian harm due to reduced verification time is a direct harm linked to the AI system's use. Although the company emphasizes human-in-the-loop decision-making, the AI's pivotal role in accelerating targeting decisions and the associated risks of harm to civilians meet the criteria for an AI Incident under the OECD framework, specifically harm to people and communities resulting from the AI system's use.
Pentagon's Project Maven gains prominence as AI backbone in U.S. strikes on Iran

2026-04-06
crypto.news
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes surveillance data and assists in targeting decisions, thus directly influencing military strikes. The article reports that strikes facilitated by this system have caused deaths and injuries, including a strike on a school with over a hundred children killed. This is a clear case of injury and harm to groups of people caused by the use of an AI system, meeting the criteria for an AI Incident. The harm is realized and directly linked to the AI system's use in military operations.