AI Systems Used in US and Israeli Military Operations Cause Lethal Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems, including Anthropic's Claude, have been actively used by the US and Israel in military operations against Iran and in Gaza, assisting in target identification and decision-making that led to lethal outcomes. Experts warn of the dangers posed by the lack of oversight as AI accelerates the lethality of modern warfare.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly mentioned as being used for military targeting and decision-making. The AI's use has directly led to harm (deaths and destruction) and potential violations of human rights and humanitarian law. The article details realized harm caused by AI-accelerated military actions, fulfilling the criteria for an AI Incident. The concerns about reduced human oversight and ethical implications further support the classification as an incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Safety

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death), Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Deadly Advanced 'Weapons' Now Being Used in the Iran War: Here Are the Signs

2026-03-05
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for military targeting and decision-making. The AI's use has directly led to harm (deaths and destruction) and potential violations of human rights and humanitarian law. The article details realized harm caused by AI-accelerated military actions, fulfilling the criteria for an AI Incident. The concerns about reduced human oversight and ethical implications further support the classification as an incident rather than a hazard or complementary information.

China Is Building a Stockpile of Advanced Weapons, and America Cannot Escape

2026-03-05
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being developed and used in military applications by China and the US, including autonomous drones and AI decision-making systems. These systems are intended for combat and intelligence purposes, which inherently carry risks of injury, death, and disruption. Since the article does not report a specific harmful event but rather ongoing development and deployment with potential for harm, this fits the definition of an AI Hazard. The presence of AI systems is clear, their use is described, and the plausible future harm is credible given the military context and capabilities described.

AI as a 'New Front' in the War in the Middle East

2026-03-05
Kompas.id
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used in lethal military operations that have caused physical harm and loss of life, fulfilling the criteria for an AI Incident under harm to persons and communities. The use of AI to generate and spread disinformation, harming communities and the information environment, further supports this classification. The article reports actual harms occurring due to AI use, not just potential risks, so it is not an AI Hazard or Complementary Information. Therefore, the event is best classified as an AI Incident.

The Role of a Digital 'Brain': Anthropic's Claude AI Helps the US Military Strike Iran

2026-03-04
investor.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Claude) in military operations that involve target identification and decision-making in attacks against Iran. The AI's involvement in these lethal operations directly relates to potential harm to people and communities, fulfilling the criteria for an AI Incident. The mention of AI hallucinations causing misidentification of targets further supports the presence of realized or imminent harm. The lack of regulatory oversight and ethical concerns reinforce the seriousness of the incident. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

AI Becomes a Deadly War Machine, Experts Reveal Its Dangers

2026-03-04
detikInet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in active military operations that have resulted in lethal actions, such as targeting and strikes, which constitute harm to people. The AI systems are described as decision support tools that influence real-world lethal outcomes, with concerns about their reliability and human oversight. This fits the definition of an AI Incident because the AI's use has directly led to harm (injury or death) in conflict zones. The article does not merely warn about potential future harm but reports ongoing use and consequences, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI involvement is central to the event described.

The Militarization of AI

2026-03-05
Republika Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in real military operations to analyze intelligence and assist in targeting, which directly relates to harm to people (injury or death) and communities due to military actions. The AI's role in accelerating the kill chain and influencing targeting decisions, even if human verification is required, means the AI system's use is a contributing factor to harm. This fits the definition of an AI Incident because the AI's use has directly led to or is part of events causing harm. The article does not merely discuss potential future risks or general AI developments but reports on actual AI deployment in military operations with associated harms, thus qualifying as an AI Incident.