Palantir AI Systems Used in Israeli Military Operations Causing Civilian Harm


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI technologies from US-based Palantir, including data-analysis and targeting platforms, have been used by the Israeli military in operations in Gaza, Iran, and Lebanon, directly contributing to lethal operations and civilian harm. These AI systems facilitated surveillance, target identification, and decision-making in military actions, raising concerns over human rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems developed by Palantir being used by military forces in active conflict zones, with direct involvement in target identification and attack operations that have caused harm and casualties. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to groups of people, as well as violations of human rights. The article also discusses the AI systems' role in surveillance and lethal operations, confirming the AI system's pivotal role in causing harm. Hence, this is classified as an AI Incident.[AI generated]
AI principles
Respect of human rights; Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Physical (death); Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection; Reasoning with knowledge structures/planning


Articles about this incident or hazard


TECHNOCRACY AND TECHNOFASCISM - AI technologies developed by Palantir are being used by the Israeli army

2026-05-10
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems developed by Palantir being used by military forces in active conflict zones, with direct involvement in target identification and attack operations that have caused harm and casualties. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to groups of people, as well as violations of human rights. The article also discusses the AI systems' role in surveillance and lethal operations, confirming the AI system's pivotal role in causing harm. Hence, this is classified as an AI Incident.

TECHNOCRACY AND TECHNOFASCISM - AI technologies developed by Palantir are being used by the Israeli army - Ankara Haberleri

2026-05-10
HABERTURK.COM
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly mentioned as being used by the Israeli military for target identification and data analysis. The reported use of these AI technologies in military operations linked to harm in Gaza and other regions indicates direct or indirect involvement of AI in causing harm to people and communities, including possible human rights violations. Therefore, this event qualifies as an AI Incident due to the realized harm associated with the AI system's use in conflict and military targeting.

Israel's digital massacre network: Palantir technologies in the service of the Zionist army

2026-05-10
Sabah
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as being used for military targeting and surveillance that have directly contributed to lethal operations and human rights violations, fulfilling the criteria for an AI Incident. The harms include injury and death, as well as breaches of fundamental rights through invasive data collection and targeting. The article provides concrete examples of AI use in active conflict causing real harm, not just potential or hypothetical risks. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.

Joint treachery by the US and Israel! A former Microsoft employee exposed the scandal

2026-05-10
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems developed and deployed by Palantir being used in military operations that have caused harm to people, including targeted killings and surveillance facilitating lethal attacks. The harms include injury and death, violations of human rights, and harm to communities, all directly linked to the AI systems' use. The involvement of AI in target identification and decision-making in these operations is clear, and the harms are realized, not hypothetical. Hence, this is an AI Incident as per the definitions provided.

AI technologies developed by Palantir are being used by the Israeli army

2026-05-10
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves AI systems developed and used by Palantir for military target identification and data analysis, which have been directly linked to military operations causing harm to civilians and potential human rights violations. The article provides evidence of actual use of these AI systems in conflict zones with resulting harm, not just potential or hypothetical risks. Therefore, this is an AI Incident because the AI system's use has directly led to harm to people and violations of rights.

Technocracy and imperialism: How do AI giants feed US/Israeli massacres?

2026-05-10
Aydınlık
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Palantir's platforms and AI integration with Pentagon programs) used in military targeting and intelligence that have been employed in real-world lethal operations causing harm to people and communities. This constitutes direct harm resulting from the use of AI systems, fulfilling the criteria for an AI Incident. The article also includes expert warnings about risks but the primary focus is on actual AI-enabled harm occurring in conflict zones, not just potential harm or governance responses. Therefore, the event is classified as an AI Incident.