AI-Assisted Targeting by Project Maven Leads to Civilian Deaths in Iran

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI system Project Maven, developed by Palantir and used by the US and Israeli militaries, played a central role in accelerating battlefield decisions and target selection in Iran. Its algorithmic recommendations contributed to a mistaken attack on a school in Minab, resulting in over 175 civilian deaths.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems by the US military to process data and support targeting decisions in an ongoing conflict that has caused substantial civilian casualties and destruction. The AI tools are integral to the military operations that have led to harm to people and property, fulfilling the criteria for an AI Incident. Although humans make final decisions, the AI's role in accelerating and informing those decisions directly contributes to the harm. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Respect of human rights
Safety

Industries
Government, security, and defence

Affected stakeholders
General public
Children

Harm types
Physical (death)

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection


Articles about this incident or hazard

Congress calls for oversight after reports US military is using AI in Iran War

2026-03-11
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military operations to assist in target identification and prioritization, indicating AI system involvement. The harm described (civilian casualties from air strikes) is real and significant, but the article states it is unknown whether AI contributed to the unintentional bombing. Since AI is used in the process but humans make final decisions, and no direct link between AI malfunction or misuse and harm is confirmed, this does not meet the threshold for an AI Incident. However, the use of AI in lethal targeting with potential for harm and calls for oversight indicate a plausible risk of AI-related harm, qualifying this as an AI Hazard.

US military confirms use of 'advanced AI tools' in war against Iran

2026-03-11
Al Jazeera Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by the US military to process data and support targeting decisions in an ongoing conflict that has caused substantial civilian casualties and destruction. The AI tools are integral to the military operations that have led to harm to people and property, fulfilling the criteria for an AI Incident. Although humans make final decisions, the AI's role in accelerating and informing those decisions directly contributes to the harm. Therefore, this event is classified as an AI Incident.

US military confirms use of 'advanced AI tools' in war against Iran

2026-03-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by the US military to process data and aid in targeting decisions during a conflict that has caused substantial civilian deaths and damage to civilian infrastructure. The AI systems are directly involved in the use phase, supporting decisions that have led to harm to people and property, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI's role is pivotal in accelerating decision-making in lethal operations. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

US confirms use of 'advanced AI tools' amid debate if AI error led to deadly attack on Iran school

2026-03-12
TRT World
Why's our monitor labelling this an incident or hazard?
The article explicitly confirms the use of advanced AI tools by CENTCOM in the war against Iran, including in processing intelligence data for targeting. The bombing of the school, causing 175 deaths, is linked to a targeting failure involving outdated intelligence, which was likely processed or influenced by AI systems. The harm is realized and severe (loss of civilian lives), and the AI system's involvement in the intelligence process is a contributing factor to the incident. Although humans made the final decision, the AI tools' role in generating targeting data that led to the strike makes this an AI Incident under the framework's criteria.

How corporations have collaborated with US military over the decades

2026-03-12
Al Jazeera Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by the US military in active conflict zones, including AI tools that assist in targeting and intelligence operations. It references concrete harms such as the abduction of a political leader and the ongoing war in Gaza with massive casualties and destruction, where AI systems are implicated in the conduct of military operations. The involvement of AI in these harms is direct or indirect, as AI tools support decision-making and operational capabilities that lead to injury, death, and violations of human rights. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to persons and communities.

Question for Hegseth: Did US Military Rely on AI Targeting for Bombing of Iranian School?

2026-03-12
Common Dreams
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools by the US military in targeting decisions, including systems from Palantir and Anthropic's Claude AI. The bombing of a civilian school resulting in mass casualties is a clear harm to persons and communities. The AI system's role in target identification and decision-making is central to the incident, and the letter from lawmakers questions the extent of AI involvement and human oversight. Given the direct link between AI-assisted targeting and the resulting civilian deaths, this event meets the criteria for an AI Incident.

Iran war is a harbinger of future AI-powered warfare

2026-03-13
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Claude and Maven) used in military targeting and decision-making, which have directly influenced the selection and prioritization of targets in an active war zone. The resulting military strikes have caused destruction and harm, including the accidental bombing of a school, which, although attributed to human error, occurred within the context of AI-accelerated operations. The AI system's role in speeding up targeting and enabling real-time operations is pivotal to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to people and communities in the conflict.

Artificial Intelligence Is Already Making War More Horrific

2026-03-13
jacobin.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that contributed to a bombing causing harm to civilians, which is a direct harm to people and communities. Although humans make the final decision, the AI's role in shaping targeting decisions is pivotal and has led to injury and death. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to persons and communities in a conflict setting.

AI's "Deadly Debut" in Iran War: US Struck 1,000+ Targets as China Warns of 'Terminator-Like' Future

2026-03-14
TFIGlobal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military targeting decisions that led to a large-scale bombing campaign with reported civilian casualties, including the death of over 150 children. The harm to human life is direct and significant, and the AI systems played a pivotal role in accelerating and prioritizing targets. Although it is not fully confirmed if AI directly caused the specific tragic strike, the AI's involvement in the targeting process and the resulting civilian harm meets the criteria for an AI Incident. The event is not merely a potential risk or a governance discussion but involves realized harm linked to AI use in warfare.

AI enters a war for the first time: this is Palantir, the centerpiece of the US offensive against Iran

2026-03-13
HERALDO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Project Maven) in military operations that led to a significant harm event: the mistaken bombing of a school causing over 175 deaths. The AI system's role in target suggestion and prioritization directly influenced the harm, even if humans had final decision authority. This fits the definition of an AI Incident, as the AI system's use indirectly led to injury and harm to people. The presence of AI is clear, the harm is realized, and the causal link is established through the accelerated and algorithmically mediated targeting process.

How US artificial intelligence works in the Iran war: it reviews up to 80 targets per hour and recommends where to bomb

2026-03-15
LaSexta
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing data to select military targets and recommend attacks. The AI's use directly led to significant harm—over 175 deaths from a bombing caused by an erroneous target selection. This fits the definition of an AI Incident, as the AI system's use directly caused injury and harm to people. The harm is materialized and significant, and the AI system's role is pivotal in the incident.

Maven, Palantir's AI, is a centerpiece of the offensive against Iran and of the future of warfare

2026-03-14
Vanguardia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Maven) explicitly described as being used in military operations to select and prioritize targets. The AI system's outputs directly contributed to a fatal incident involving civilian casualties, fulfilling the criteria for harm to people. The article details the AI's role in accelerating decisions and generating target lists, which led to a mistaken attack on a civilian school. This direct link between AI use and harm, including injury, death, and potential violations of human rights, qualifies the event as an AI Incident.

What is Maven, Palantir's AI, the centerpiece of the US and Israeli offensive in Iran?

2026-03-14
Proceso Hn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Maven) in military operations that have resulted in civilian deaths, which constitutes harm to people and communities. The AI system's role in target selection and prioritization directly influenced these harms, fulfilling the criteria for an AI Incident. The involvement is through the use of the AI system in operational decision-making, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident.

Project Maven: AI in the US and Israeli war

2026-03-13
7dias.com.do
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Project Maven) used in military operations for target selection and decision acceleration. It reports a concrete harm event—the attack on a school causing over 175 deaths—likely linked to errors in AI-assisted target selection. This constitutes indirect harm caused by the AI system's use. The involvement of AI in the development and use phases, and the resulting civilian casualties, meet the criteria for an AI Incident. Although humans make final firing decisions, the AI system's role in generating and prioritizing targets is pivotal in the harm caused. Hence, the classification as AI Incident is justified.

Iran, the first large-scale test of AI-assisted warfare

2026-03-16
MuyComputerPRO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in real military operations that have resulted in civilian deaths and widespread harm, fulfilling the criteria for an AI Incident. The AI systems are involved in the use phase, assisting in decision-making that has directly or indirectly led to injury and harm to people. The presence of AI is clear, with named systems and models, and the harm is materialized, not hypothetical. Although human validation is claimed, the AI's role in providing data and recommendations that influenced lethal actions is pivotal. The article also discusses ethical and legal concerns, investigations, and political scrutiny, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

"Can recommend countermeasures": the Bundeswehr wants to push ahead with AI to develop strategies against enemies faster

2026-03-31
Focus
Why's our monitor labelling this an incident or hazard?
The article describes the planned or ongoing use of AI systems for military strategic support, with no mention of any realized harm or malfunction. The concerns about data access and the controversial nature of the AI tool Maven relate to governance and privacy issues but do not describe an AI Incident or a plausible AI Hazard event. Therefore, this is best classified as Complementary Information providing context on AI use in military settings and related governance considerations.

Palantir: How the "brain" of the kill chain helps the US military

2026-03-27
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Palantir's Maven Smart System) used by the US military for target identification and operational planning, which has been involved in real military actions causing loss of life, including a deadly attack on a school in Iran. This constitutes direct harm to people (harm category a). The AI system's role in accelerating and supporting lethal decisions, even if not fully transparent, is pivotal to the harm. The article also highlights concerns about decision-makers' inability to understand AI recommendations, increasing the risk of wrongful targeting. These factors meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to significant harm to persons and potential violations of rights.

Suspected US attack on a school in Iran: Palantir system comes into focus

2026-03-30
heise online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Palantir's Maven Smart System) in the military targeting process. The system's reliance on outdated data and its role in accelerating the kill-chain without sufficient human verification directly contributed to the wrongful bombing of a civilian school, causing significant loss of life. This constitutes an AI Incident because the AI system's malfunction and use directly led to injury and harm to people. The event is not merely a potential hazard or complementary information but a concrete incident with realized harm linked to AI system use.

Palantir stock: positive market outlook!

2026-03-29
Börse Express
Why's our monitor labelling this an incident or hazard?
Palantir's Maven is an AI system used by the military to analyze data and identify targets, which directly relates to military operations that can cause harm. The article describes its operational deployment and integration into the US Army's permanent programs, implying ongoing use with potential for harm. Although no specific incident of harm is described, the system's use in conflict zones and target identification inherently involves risks of injury or harm to persons and communities. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm, but no specific harm event is reported in the article.

Palantir: From CIA incubator to military power factor

2026-03-27
uncut-news.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (Maven) developed and used by Palantir for military target identification and command and control, which has been involved in thousands of targeted strikes. This constitutes direct use of AI in lethal operations, which can cause injury or harm to persons (harm category a). The article also references ethical and legal concerns about AI in lethal decision-making, underscoring the risk of harm. The AI system's role is pivotal in these military operations, and the institutionalization of Maven as a Program of Record ensures its continued use. Hence, this is an AI Incident, not merely a hazard or complementary information, as harm is ongoing or has occurred through AI-enabled military actions.

Project Maven, the AI at the heart of the war in Iran

2026-04-05
Blick.ch
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for military targeting and battlefield management. The article links its use to a military campaign against Iran, where a large number of strikes occurred rapidly, including one that caused civilian casualties (a school strike). This constitutes injury and harm to people and communities, fulfilling the criteria for an AI Incident. The AI system's role in accelerating targeting and strike decisions is pivotal to the harm described. Therefore, this event qualifies as an AI Incident.

Artificial intelligence, airspace control... What is the Pentagon's "Project Maven"?

2026-04-05
CNEWS
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military targeting and battlefield management that has accelerated the kill chain process, leading to numerous strikes in the Iran conflict. The AI system's outputs directly influence lethal decisions and actions, which inherently cause harm to people and property. The article indicates that the AI system's use has materially contributed to these harms, fulfilling the criteria for an AI Incident. Ethical concerns and employee protests further underscore the recognized risks and harms associated with this AI system's deployment in warfare.

AI at the heart of the war in Iran: five things to know about the Pentagon's "Project Maven"

2026-04-05
LaProvence.com
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used for military targeting and battlefield management. Its use has accelerated the process of identifying and striking targets, which has resulted in lethal strikes, including one that hit a school causing deaths. This constitutes direct harm to persons and communities caused by the AI system's deployment. The article also mentions an ongoing Pentagon investigation into the incident, confirming the seriousness of the harm. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in military operations.

AI at the heart of the war in Iran: What should you know about the Pentagon's "Project Maven"?

2026-04-05
La Libre.be
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI-assisted targeting and battlefield management system that accelerates the kill chain from detection to destruction. Its use in military strikes, including one that reportedly hit a school causing deaths, constitutes direct harm to people. The AI system's role in enabling rapid targeting and execution of strikes links it causally to the harm. Therefore, this event qualifies as an AI Incident due to injury or harm to persons resulting from the AI system's use in military operations.

"Like magic": what is Maven, the tool the US military can no longer do without?

2026-04-05
RTL Info
Why's our monitor labelling this an incident or hazard?
Maven is an AI system used operationally for military targeting and strike decisions. The article links its use to a lethal strike that hit a civilian school, implying injury or harm to people. The AI system's role in accelerating targeting and firing processes makes it a contributing factor to this harm. Therefore, this event qualifies as an AI Incident due to direct or indirect harm caused by the AI system's use in military operations.

Why should the Pentagon's enthusiasm for AI-boosted weapons of war worry the world?

2026-04-03
Sciencepost
Why's our monitor labelling this an incident or hazard?
The article clearly describes the use of AI systems in military operations that have directly led to harm through targeted strikes, fulfilling the criteria for an AI Incident. The AI system (Project Maven and its AI components) is explicitly involved in decision-making processes for lethal actions, which have already been deployed in conflict. The concerns about ethical issues and potential catastrophic consequences further support the classification as an AI Incident rather than a mere hazard or complementary information. Therefore, this event is best classified as an AI Incident due to the realized harm caused by AI-enabled autonomous weapons systems in warfare.

How AI is at the heart of the war in Iran

2026-04-05
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Project Maven) in military targeting and strike operations. It details how the AI system accelerates the kill chain and is involved in selecting targets. The reported lethal strike on a school, likely caused by the targeting process involving this AI system, indicates direct harm to people and communities. This meets the criteria for an AI Incident, as the AI system's use has directly led to injury and harm. The article also discusses ethical concerns and the involvement of major AI companies, but the core event is the AI system's operational use causing harm in warfare.

AI at the heart of the war in Iran: five things to know about the Pentagon's "Project Maven"

2026-04-05
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Project Maven) used in military targeting and battlefield management. The system's use has directly contributed to lethal strikes, including one that hit a school causing deaths, which constitutes injury and harm to people and communities. The AI system's role in accelerating targeting and firing processes is pivotal to these harms. Hence, this is an AI Incident as the AI system's use has directly led to significant harm.

This is 'Project Maven', the AI acting as a digital brain in the United States' war against Iran

2026-04-06
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Project Maven') used in military operations that has contributed to the selection and engagement of targets, leading to lethal outcomes including civilian casualties. The AI system's role in accelerating the 'kill chain' and its operational deployment in conflict with Iran directly links it to harm to persons and communities, fulfilling the criteria for an AI Incident. The investigation into the strike on a school further confirms the occurrence of harm associated with the AI system's use.

The 'ChatGPT of war'? How the US uses AI in its offensive against Iran

2026-04-06
Exame
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as being used to accelerate military decisions, including target identification and attack execution. The system's use has directly led to military strikes, including one that hit a school, implying injury or harm to people and communities. The AI system's involvement in lethal operations and the resulting harm meet the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal in the harm caused.

"Project Maven", the AI program the US uses in the war against Iran

2026-04-06
France 24
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military targeting and battlefield management. Its use in the conflict with Iran has led to attacks, including one that caused civilian casualties (the school attack). The AI system's role in accelerating the kill chain and target selection directly links it to harm to people and communities. The article's mention of an investigation into the attack further supports the occurrence of harm. Hence, this is an AI Incident involving direct harm caused or facilitated by an AI system.

"Project Maven", the AI program the US uses in the war against Iran

2026-04-06
El Economista
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military targeting and battlefield management. The article links its use to a military operation where over a thousand targets were struck, including a fatal attack on a school. This constitutes direct or indirect harm to people and communities caused by the AI system's deployment. Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use has led to realized harm.

What is Project Maven, the key AI system in the military offensive against Iran

2026-04-06
EL UNIVERSO
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes imagery and assists in target selection and attack execution, thus fulfilling the AI System definition. The article links its use to a military offensive with documented civilian harm (the school attack causing deaths), which constitutes injury or harm to people, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in accelerating the kill chain and enabling rapid strikes, making it a direct or indirect cause of the harm. The Pentagon investigation further confirms the seriousness of the incident. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

An AI controls the US military campaign in Iran: how does it operate, and what results has it produced?

2026-04-06
ABC Digital
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military operations to identify and select targets, thus directly influencing lethal actions. The article reports actual harm resulting from these operations, including civilian casualties from a strike on a school, which is under investigation. The AI system's role in accelerating and managing the targeting process makes it a contributing factor to these harms. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to injury and harm to people and communities.

'Project Maven': how the AI program the United States uses in the war against Iran works

2026-04-06
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system combining satellite and sensor data with automated analysis to guide military operations. The article links its use to a real military campaign with documented lethal outcomes, including a controversial attack causing civilian harm. The AI system's role in accelerating the 'chain of attack' and target selection implies direct involvement in causing harm. Therefore, this event qualifies as an AI Incident due to the realized harm to people and communities resulting from the AI system's use in warfare.

Project Maven: the 'AI soldier' Trump has ready to strike Iran's plants

2026-04-06
Expansión
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes satellite imagery to identify targets and assist in planning attacks. The article states that the system has been used in ongoing military operations against Iran, with over a thousand targets hit in the first 24 hours of a specific operation. A deadly attack on a school is linked to these operations, and the Pentagon is investigating. This shows that the AI system's use has directly led to harm to people and property, fulfilling the criteria for an AI Incident. The involvement is in the use of the AI system for lethal military targeting, causing injury and harm to persons and property.

Pentagon AI accelerates attacks in the war against Iran; understanding 'Project Maven'

2026-04-06
O Globo
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting and battlefield management that has been operational, and steadily expanded, since 2017. The article links its use to the acceleration of attacks, including one that caused harm to a school, implying injury or harm to people and communities. The AI system's role in identifying targets and guiding attacks is pivotal to these harms. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use in military operations.

Pentagon deploys AI in Project Maven for military operations

2026-04-06
UDG TV
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting and battlefield management. The article reports a specific incident where a deadly attack on a school occurred during an operation where Maven was likely involved in target selection and attack execution. This constitutes direct or indirect harm to people caused by the AI system's use. Therefore, this event qualifies as an AI Incident due to injury and harm to persons resulting from the AI system's deployment in military operations.

'Project Maven', the AI program used by the US in the war against Iran

2026-04-06
UOL notícias
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly mentioned as being used to analyze drone imagery in a military campaign. Its use directly supports military operations that inherently involve harm to people, fulfilling the criteria for injury or harm to persons. Therefore, the event involves the use of an AI system that has directly led to harm, classifying it as an AI Incident.

How 'Project Maven', the US AI used against Iran, works

2026-04-07
R7 Notícias
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system developed to process drone imagery and integrate battlefield data to assist in targeting and operational decisions. Its deployment in active military operations against Iran and other regions involves direct use of AI in contexts where harm to persons, property, and communities is a foreseeable and intended outcome. The article details the system's operational role and the associated threats of military action, indicating realized or imminent harm linked to AI use. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing or enabling harm in warfare.

US uses AI as a technology of war in Project Maven

2026-04-07
Portal Tela
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly mentioned as being used in military operations to identify targets from drone and satellite imagery. Its use directly affects battlefield decisions that can cause injury or death, fulfilling the criteria for harm to persons or groups. The involvement of AI in lethal autonomous or semi-autonomous weapon systems is a recognized AI Incident due to the direct link between AI outputs and physical harm. Although the article discusses ethical debates and controversies, the primary focus is on the AI system's active deployment and its direct role in warfare, which meets the definition of an AI Incident rather than a hazard or complementary information.

Olhar do Amanhã: how the US uses AI as a technology of war

2026-04-09
Olhar Digital
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting, which involves the development and use of AI for lethal autonomous or semi-autonomous weapon systems. This use of AI in warfare carries a high potential for injury or death and raises significant ethical and legal concerns. Because the article does not report a specific instance of realized harm, but the deployment of such AI systems in military operations could plausibly lead to one, this is classified as an AI Hazard rather than an AI Incident.

"Project Maven": How the AI program used by the US in the war against Iran operates

2026-04-07
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system used in military operations that has directly influenced the conduct of attacks, leading to harm (loss of life and destruction) in a conflict setting. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities through its role in warfare. The article does not merely discuss potential risks or future harms but reports on actual deployment and impact, thus qualifying as an AI Incident rather than a hazard or complementary information.

Project Maven: how has AI helped the US in the attacks on Iran?

2026-04-07
TecMundo
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly mentioned as being used in military attacks against Iran and other targets. Its use in targeting and managing offensive operations directly leads to harm (injury or death, destruction of property) as part of warfare. Even though specific results are not disclosed, the nature of military attacks inherently involves harm to people and communities. The AI system's development and use in this context directly contribute to these harms, meeting the criteria for an AI Incident. The article does not merely speculate about potential harm but indicates active use in conflict, thus not an AI Hazard or Complementary Information.

Project Maven: what is the program that brought AI to the battlefield

2026-04-07
ABC Digital
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system developed and used by the U.S. military to process large volumes of surveillance data to support operational decisions, including lethal actions. The article discusses the ethical controversy and potential harm arising from the AI's role in accelerating military decision-making that can lead to lethal outcomes. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harms related to human rights and ethical breaches. The controversy and resignations further underscore the realized harm and societal impact.

'Project Maven', the AI program used by the US in the war against Iran

2026-04-06
Istoe dinheiro
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting and battlefield management. The article links its use to a deadly attack on a school, indicating harm to civilians and communities. This constitutes injury or harm to people and harm to communities, which are defined harms under the AI Incident framework. The AI system's involvement in accelerating the kill chain and target selection makes it a contributing factor to the harm. Hence, this event qualifies as an AI Incident.

How the US is employing artificial intelligence in the attacks on Iran

2026-04-07
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as integrating and analyzing sensor and imagery data to identify and classify targets and suggest attack options, which are then executed in military operations. The article states that this system has accelerated the speed and scale of attacks, with over a thousand targets hit in the first 24 hours of a named operation. This clearly indicates that the AI system's use has directly led to harm (physical injury or death) in a military conflict context. Ethical concerns and criticisms further support the recognition of significant harm. Therefore, this event qualifies as an AI Incident due to the direct causal role of AI in lethal military actions.

Understanding the US artificial intelligence program in the war against Iran

2026-04-06
Revista Fórum
Why's our monitor labelling this an incident or hazard?
The Project Maven AI system is explicitly described as an AI system used in military operations to identify and select targets, accelerating the kill chain and enabling attacks on over a thousand targets in a short period. This use of AI directly leads to harm through destruction and potential injury or death, fulfilling the criteria for an AI Incident. The article details the system's development, use, and operational impact, with clear links to realized harm in a conflict setting. Therefore, this event qualifies as an AI Incident under the OECD framework.

Project Maven, the AI program used by the US in the war against Iran

2026-04-06
O Povo
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting and attack decisions. The article reports that the system has likely accelerated the pace of attacks, including one that killed civilians at a school. This is a direct harm to people caused by the use of an AI system. The involvement of Palantir's AI technology as the operational core of the program and the Pentagon's investigation into the deadly strike further confirm the AI system's role in the incident. Therefore, this event qualifies as an AI Incident due to injury and harm to people and communities caused by the AI system's use in warfare.

'Project Maven', the AI program used by the US in the war against Iran

2026-04-06
Folha - PE
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system used for military targeting and battlefield management. The article links its use to a sustained high tempo of attacks, including one that resulted in civilian casualties (a school hit in a deadly attack). This constitutes harm to people and communities caused directly or indirectly by the AI system's use. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's deployment in warfare.

AI at war: technology is both weapon and target in the Middle East conflict

2026-04-07
IA Brasil Notícias
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as analyzing sensor and satellite data to identify targets and support military operations, directly influencing actions that cause harm in conflict. The article details how this AI-enabled targeting accelerates attack processes, which can lead to injury, death, and destruction, fulfilling the criteria for harm to persons and property. Furthermore, the missile strikes on data centers supporting AI operations represent harm to critical infrastructure. These facts establish a direct link between AI system use and realized harms, qualifying the event as an AI Incident rather than a hazard or complementary information.

Project Maven: how AI is transforming US military operations

2026-04-07
Lorena
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as integrating and analyzing multiple data sources to identify and classify targets and suggest attack options, which are then executed upon operator confirmation. The system's use in real military operations, such as the "Epic Fury" operation where over a thousand targets were hit in 24 hours, indicates direct AI involvement in causing harm (injury or death, destruction of property). The article also mentions ethical concerns and operational limitations but confirms the AI's active role in lethal decision-making. Hence, this is an AI Incident as the AI system's use has directly led to harm in military conflict.

Alleged US attack on school in Iran: Palantir system in focus

2026-03-31
heise online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of Palantir's Maven Smart System, an AI platform that analyzes intelligence data to identify targets. The system's use of outdated data and its role in an accelerated kill chain process directly contributed to the fatal airstrike on a civilian school, causing loss of life. This meets the definition of an AI Incident as the AI system's malfunction and use directly led to harm to people. The article also discusses the system's deployment and plans for expansion, but the primary focus is on the realized harm caused by the AI system's failure in this incident.

The US is waging AI-assisted war on Iran. Here's how.

2026-04-01
Aol
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military targeting and operations that have directly led to significant harm to civilians, including deaths and destruction of property. The AI systems are central to the conduct of the war and have contributed to the harm, either directly or indirectly, through their role in target selection and prioritization. This meets the definition of an AI Incident because the development and use of AI systems have directly or indirectly caused harm to people and communities. The article also discusses ongoing investigations and concerns about accountability, reinforcing the realized harm caused by AI involvement.

Palantir UK boss says it's up to militaries to decide how AI targeting is used in war

2026-04-01
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Maven Smart System) explicitly described as being used operationally in military targeting decisions. The system's outputs have directly influenced strikes that have resulted in civilian casualties, fulfilling the harm criteria (injury or harm to persons). The article details concerns about overreliance on AI outputs and insufficient verification time, indicating the AI's role in causing harm is direct or indirect. This meets the definition of an AI Incident rather than a hazard or complementary information, as harm has already occurred and the AI system's involvement is central to the event.

Palantir UK boss says it's up to militaries to decide how AI targeting is used in war

2026-04-01
BBC
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Maven) used in military targeting, which is directly linked to potential harm to civilians (harm to persons) if AI misidentifies targets. Although the article does not confirm an AI-caused incident, it presents credible concerns and warnings about the plausible future harm from AI use in lethal military operations. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving injury or death. The discussion of regulatory and oversight responses further supports this classification as a hazard rather than an incident or complementary information.

The US is waging AI-assisted war on Iran. Here's how

2026-04-01
USA Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Maven Smart System) being used in military targeting and data management during an active conflict, with direct links to civilian deaths, including a school strike causing numerous fatalities. This meets the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to persons (civilian deaths) and communities. The involvement of AI in lethal targeting decisions, even if humans retain final decision authority, is a contributing factor to the harm. The article also discusses concerns about AI accuracy and potential future autonomous weapons, but the realized harm from AI-assisted targeting is the primary focus, making this an AI Incident rather than a hazard or complementary information.

Palantir UK boss says it's up to militaries to decide how AI targeting is used in war

2026-04-01
MyJoyOnline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Palantir's Maven Smart System) used in military targeting decisions. The system's outputs have been used in actual military strikes, some resulting in civilian casualties, which constitutes harm to people. The AI system's role in accelerating and guiding targeting decisions, combined with concerns about insufficient verification and overreliance, shows that the AI system's use has directly or indirectly led to harm. The presence of human oversight is noted but does not negate the AI's pivotal role in the harm caused. Hence, this is an AI Incident rather than a hazard or complementary information.

Palantir's 'Workflow' of AI-Directed Death

2026-04-03
Truthdig
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting and killing, with direct involvement in lethal operations that have caused deaths and harm to civilians. The AI system's role in accelerating and automating the kill chain, combined with the reported civilian casualties, constitutes direct harm to persons and communities. Although a human is currently in the loop, the AI system's outputs are pivotal in the decision-making process leading to lethal outcomes. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

War in the hands of the algorithm

2026-03-31
Trabajadores
Why's our monitor labelling this an incident or hazard?
Project Maven is an AI system explicitly described as analyzing large volumes of data to identify targets, influencing military strike decisions. The article reports actual harm resulting from its use, including civilian deaths from a mistaken strike. The AI system's role in accelerating targeting decisions and reducing time for verification directly links it to injury and harm to people, fulfilling the criteria for an AI Incident. The involvement of AI in lethal military operations causing real casualties is a clear case of harm directly linked to AI use.

War in the Hands of the Algorithm

2026-04-02
Periodico26
Why's our monitor labelling this an incident or hazard?
Project Maven is explicitly described as an AI system that processes large volumes of data to identify potential military targets. Its use has directly led to harm, as evidenced by the attack on a school causing civilian deaths. The AI system's involvement in the development and use phases, and its direct link to physical harm, clearly classify this event as an AI Incident under the OECD framework. The article details realized harm caused by the AI system's outputs influencing lethal military actions, not just potential or future harm.