Palantir Provides AI Targeting Tools to Israel in Gaza War


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Palantir CEO Alex Karp said the company has stepped up its supply of new AI-enabled data-analysis and targeting tools to Israel since the October 7 Hamas attack. The AI models help identify targets and propose airstrikes, raising ethical concerns over civilian harm amid the Gaza conflict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that Palantir provides AI tools used by Israel in military operations during an active war, including AI that helps identify targets and propose airstrikes. Because these systems are being used in warfare where injury and harm to persons are occurring, their use directly contributes to harm to people and communities, fulfilling the criteria for an AI Incident. The article also notes the controversy and ethical concerns around military AI, reinforcing the significance of the harm. Thus, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy, Privacy & data governance, Robustness & digital security, Human wellbeing

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Physical (death), Physical (injury), Human or fundamental rights, Public interest, Psychological, Economic/Property

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Organisation/recommenders, Recognition/object detection, Forecasting/prediction, Goal-driven organisation


Articles about this incident or hazard


Palantir CEO says Japan should build AI defense targeting system with U.S.

2025-03-16
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of defense targeting, which could plausibly lead to significant harm if deployed or misused. However, it only discusses the idea and encouragement for collaboration, without any actual deployment, malfunction, or harm occurring. Therefore, it represents a plausible future risk (AI Hazard) rather than an incident or complementary information.

Palantir's Karp Inks New Manufacturing Deals With Defense Startups

2025-03-13
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by Palantir and its partners for various applications, indicating AI system involvement. However, there is no indication that these systems have caused or contributed to any harm, injury, rights violation, or disruption. The article is primarily about new business deals and the strategic importance of AI in manufacturing and defence, so it fits Complementary Information: it provides context and updates on AI ecosystem developments without describing an incident or hazard.

Palantir allegedly supplying Israel with AI tools amid Israel's war in Gaza - Business & Human Rights Resource Centre

2025-03-13
Business & Human Rights
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Palantir provides AI tools used by Israel in military operations during an active war, including AI that helps identify targets and propose airstrikes. Because these systems are being used in warfare where injury and harm to persons are occurring, their use directly contributes to harm to people and communities, fulfilling the criteria for an AI Incident. The article also notes the controversy and ethical concerns around military AI, reinforcing the significance of the harm. Thus, this is an AI Incident rather than a hazard or complementary information.