AI-Driven Targeting in Iran Leads to Civilian Harm and Raises Global Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The United States and Israel used advanced AI systems, including Project Maven, to rapidly identify and attack over a thousand targets in Iran, resulting in civilian casualties and the death of Iran's supreme leader. Reports highlight that AI-driven targeting accelerated attacks and that algorithmic errors contributed to wrongful strikes on civilian sites.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI systems in military targeting and attack execution, which directly led to harm including civilian deaths and destruction. The involvement of AI in accelerating decision-making and target selection is clear, and the reported errors in AI algorithms plausibly caused wrongful attacks on civilian sites. These outcomes constitute injury and harm to people and potential violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' role is pivotal in the chain of events leading to these harms.[AI generated]
AI principles
Safety; Robustness & digital security

Industries
Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Physical (death); Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection; Goal-driven organisation


Articles about this incident or hazard


War in Iran: AI use sped up attacks against the Islamic Republic; warnings of risks from algorithmic errors

2026-03-05
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems in military targeting and attack execution, which directly led to harm including civilian deaths and destruction. The involvement of AI in accelerating decision-making and target selection is clear, and the reported errors in AI algorithms plausibly caused wrongful attacks on civilian sites. These outcomes constitute injury and harm to people and potential violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' role is pivotal in the chain of events leading to these harms.

AI use in the offensive against Iran raises alarm over possible failures in target selection

2026-03-06
Gestión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in offensive military operations, including target selection and attack execution, which have resulted in civilian deaths and destruction. The harm to civilians, together with the potential misidentification of targets due to AI errors, constitutes injury to groups of people and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm. The presence of AI in the decision-making chain and the resulting casualties meet the criteria for an AI Incident rather than a hazard or complementary information.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting operations that have directly resulted in civilian deaths and destruction, fulfilling the criteria for an AI Incident. The AI systems' use in selecting targets and accelerating attacks has led to real harm (civilian casualties), and errors in AI decision-making plausibly contributed to wrongful targeting. This constitutes injury and harm to people and communities, as well as potential violations of human rights. The AI system's development and use are central to the event, and the harm is realized, not just potential. Hence, the classification as AI Incident is appropriate.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-05
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for target identification and attack execution in military operations. The article describes actual harm resulting from these AI-driven decisions, including civilian deaths and misidentification of non-military sites as targets. This meets the criteria for an AI Incident because the AI's use has directly or indirectly led to injury and harm to people. The involvement of AI in accelerating and automating lethal decisions with insufficient human oversight further supports this classification.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-06
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Project Maven) used operationally to identify military targets, which has directly influenced attacks resulting in harm or risk of harm to civilians. The article reports actual use and consequences, including a mistaken target that could cause civilian harm. This meets the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to persons and communities. The presence of human review does not negate the AI's pivotal role in accelerating and enabling these attacks. Therefore, the event is classified as an AI Incident.

The use of AI in attacks on Iran raises concern over possible errors in target selection

2026-03-06
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in the selection and prioritization of military targets, with documented tragic outcomes including civilian deaths. The AI's role in accelerating decision-making and potential errors causing harm to civilians fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people. The presence of AI in the operational chain and the resulting harm to human life and communities confirms this classification.

Why is the use of AI in the US and Israeli attacks on Iran a cause for concern?

2026-03-06
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in the development and use phases for military targeting and attack execution. The involvement of AI has directly contributed to harm, including civilian deaths and potential wrongful targeting due to algorithmic errors or insufficient human supervision. These harms fall under injury to persons and harm to communities, which are recognized AI Incident categories. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

AI use in attacks on Iran: A risk of civilian tragedies?

2026-03-07
7dias.com.do
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military targeting and attack operations, which have resulted in civilian deaths, fulfilling the criteria for an AI Incident. The harm is direct and significant (loss of civilian lives), and the AI's role in accelerating and possibly causing errors in targeting is central to the event. This meets the definition of an AI Incident as the AI system's use has directly led to harm to people and communities.

AI at war: 5 things to know about Project Maven

2026-04-06
Dawn
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for targeting and battlefield management that has been operational and involved in US strikes against Iran. The AI system's outputs directly influence lethal military decisions, leading to harm to persons and communities, fulfilling the criteria for an AI Incident. The article discusses the development, use, and ethical controversies surrounding the AI system, confirming its pivotal role in causing harm. Hence, it is not merely a hazard or complementary information but an incident involving realized harm linked to AI.

Operation Epic Fury uses AI battlefield management to hit hundreds of targets in hours

2026-04-05
GEO TV
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor and satellite data to identify and prioritize targets for strikes. The system's involvement in the US military strikes against Iran, which resulted in the deaths and injuries of civilians including children, shows direct harm caused by the AI system's use. This meets the definition of an AI Incident as the AI system's use has directly led to injury and harm to groups of people. The article provides concrete examples of harm (civilian casualties) linked to the AI-assisted targeting, confirming the classification as an AI Incident rather than a hazard or complementary information.

Detection To Destruction: Pentagon's 'Project Maven' Is AI-Assistant In War

2026-04-05
NDTV
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military targeting and battlefield management. Its use has directly contributed to military strikes causing loss of life, including civilian casualties, which constitutes harm to people. The AI system's role in accelerating the kill chain and selecting targets is pivotal to these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's deployment in warfare.

How The Pentagon Is Using AI To Reshape Modern Warfare: Project Maven Explained

2026-04-06
News18
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes surveillance data and supports targeting decisions in warfare. The article reports actual harm resulting from strikes where the system likely played a central role, including civilian casualties. This constitutes direct harm to people caused by the use of an AI system, meeting the definition of an AI Incident. The ethical debates and operational details further confirm the AI system's pivotal role in causing significant harm.

AI at war: Five things to know about Project Maven

2026-04-05
Yahoo News
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor data and intelligence to identify targets and assist in strike decisions. Its use in recent US military strikes against Iran, which have caused civilian casualties including children, demonstrates direct harm resulting from the AI system's deployment. The article details realized harm (civilian deaths) linked to the AI-assisted targeting system, meeting the definition of an AI Incident. The involvement of AI in the development and use phases, and the direct link to harm, justify classification as an AI Incident rather than a hazard or complementary information.

AI at war | What to know about Project Maven

2026-04-06
The Hindu
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for military targeting and battlefield management. Its use has directly contributed to US strikes against Iran, which have caused civilian casualties, including children. This constitutes injury or harm to groups of people (harm category a). The AI system's role is pivotal in accelerating the kill chain and enabling these strikes. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and realized harm to human life in a conflict setting.

Project Maven: How the US military is using AI to find and hit targets

2026-04-05
ArabianBusiness.com
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system actively used by the US military for targeting and strike planning, integrating AI-driven sensor fusion and natural language models. The system's deployment in military strikes inherently involves harm to persons, fulfilling the definition of an AI Incident. Although the article does not specify particular incidents of malfunction or misuse, the operational use of AI in lethal targeting meets the criteria for an AI Incident because the AI system's use directly leads to injury or harm to people. The article also mentions supply chain risks and vendor changes, but these do not negate the fact that the AI system is currently used in operations causing harm. Therefore, the event is best classified as an AI Incident.

AI at war: Five things to know about Project Maven

2026-04-05
Mountain Democrat
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes sensor data and intelligence to assist in targeting decisions. The article links its use to US military strikes in Iran, including one that reportedly killed 168 children, indicating direct harm to people and communities. The AI system's involvement in accelerating targeting and firing processes makes it a contributing factor to these harms. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use in lethal military operations.

AI Central in US-Iran 2026 War as Targeting Systems Speed Strikes but Spark Accuracy and Ethics

2026-04-05
International Business Times AU
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in targeting and strike decision-making that have led to significant casualties, including civilian deaths, which constitutes harm to people and communities. The AI system's role is pivotal in enabling rapid strikes and influencing lethal decisions, even though humans retain final authority. The reported 60% accuracy and the tragic strike on a school demonstrate that malfunctions or limitations of the AI system contributed to harm. The event meets the criteria for an AI Incident as the AI system's use has directly led to injury and harm, and violations of human rights are implied by the civilian casualties and the ethical concerns raised.

Artificial Intelligence and war: 5 things to know about Maven Smart System

2026-04-05
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The Maven Smart System is an AI system that processes intelligence data to recommend military targets, directly influencing strike decisions. The article reports that since February, targets for over 11,000 strikes in the war with Iran were identified by Maven, indicating that the AI's outputs have been used operationally. The potential for civilian harm due to reduced verification time is a direct harm linked to the AI system's use. Although the company emphasizes human-in-the-loop decision-making, the AI's pivotal role in accelerating targeting decisions and the associated risks of harm to civilians meet the criteria for an AI Incident under the OECD framework, specifically harm to people and communities resulting from the AI system's use.

Pentagon's Project Maven gains prominence as AI backbone in U.S. strikes on Iran

2026-04-06
crypto.news
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes surveillance data and assists in targeting decisions, thus directly influencing military strikes. The article reports that strikes facilitated by this system have caused deaths and injuries, including a strike on a school with over a hundred children killed. This is a clear case of injury and harm to groups of people caused by the use of an AI system, meeting the criteria for an AI Incident. The harm is realized and directly linked to the AI system's use in military operations.

Palantir's AI Powers US Strikes in Iran War, Speeding 'Kill Chain' in First Major AI-Driven Conflict

2026-04-08
International Business Times AU
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Palantir's Maven Smart System) used in military targeting that has directly led to harm, including civilian casualties from a misidentified target. The AI system's role in accelerating and scaling strikes is central, and the harm (loss of civilian life) is a direct consequence of its use. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to groups of people. The ethical and oversight concerns further support the classification. Although human commanders retain final decision authority, the AI system's recommendations are pivotal in the targeting process and thus causally linked to the harm.

What is Project Maven? Here's how Pentagon is using AI to reshape modern warfare amid Iran war, its main purpose is to...

2026-04-07
News24
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (Project Maven) actively used by the military to analyze surveillance data and assist in identifying threats, which directly relates to military operations and potentially warfare. The use of AI in this context involves the processing of sensitive data to make decisions that could influence physical environments and conflict outcomes. Although the article does not describe a specific harm or incident caused by the system, the deployment of AI in military targeting and surveillance inherently involves risks of harm, including injury or harm to persons during warfare. Given the system is operational and used in conflict contexts, this constitutes an AI Incident due to the direct involvement of AI in military decision-making with potential for harm. The ethical concerns and protests further highlight the significance of the AI system's role in causing societal and ethical harms related to human rights and warfare.

US-Iran-Israel War Latest News: What is Project Maven? Here's how Pentagon is using AI to reshape modern warfare amid Iran war, its main purpose is to...

2026-04-07
News24
Why's our monitor labelling this an incident or hazard?
Project Maven is clearly an AI system used in military contexts, involving AI development and use. However, the article does not mention any actual harm, injury, violation of rights, or disruption caused by the system. It discusses the system's purpose, capabilities, and ethical concerns raised by employees, but no incident or hazard of harm is described. Therefore, the article fits best as Complementary Information, providing context and societal response to an AI system without reporting an AI Incident or AI Hazard.

War, accelerated: Inside Pentagon's battle machine

2026-04-07
Daily Tribune
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as integrating multiple data sources to identify targets and recommend strikes, which directly contributes to physical harm in conflict zones. The AI's role in accelerating lethal decisions means it is directly involved in causing injury or harm to people. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and harm in warfare.

Latest AI news: Pentagon's AI hit 1,000 targets

2026-04-07
crypto.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that directly led to harm—specifically, a strike causing over 165 civilian deaths. The AI system's role in generating target lists and legal justifications was pivotal in the incident. The harm is realized and significant, involving injury and loss of life, and the event raises questions about legal and ethical accountability. This fits the definition of an AI Incident, as the AI system's use directly led to harm to people and potential violations of human rights.