Palantir AI Systems Enable Surveillance and Military Harm in US and Gaza


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Palantir's AI platforms, including Gotham and Maven, have been used by US government agencies for mass surveillance, immigration enforcement, and by the Israeli military for targeting in Gaza. These applications have led to privacy violations, civil rights concerns, and direct harm to civilians, raising significant ethical and human rights issues.[AI generated]

Why's our monitor labelling this an incident or hazard?

Palantir's AI systems are explicitly described as being used for military targeting and surveillance, including in conflict zones, which directly leads to harm to communities and potential violations of human rights. The article details the company's development and use of AI for these purposes, including the use of an AI model for missile targeting in Gaza, which constitutes direct harm. Therefore, the event meets the criteria for an AI Incident due to the realized harm caused by the AI systems' use.[AI generated]
AI principles
Accountability, Fairness, Privacy & data governance, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Physical (injury), Physical (death)

Severity
AI incident

Business function
Compliance and justice

AI system task
Organisation/recommenders, Event/anomaly detection, Forecasting/prediction


Articles about this incident or hazard


Epstein invested in it and Snowden accused it of spying. So what is "Palantir"?

2025-12-04
Aljazeera
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly described as being used for military targeting and surveillance, including in conflict zones, which directly leads to harm to communities and potential violations of human rights. The article details the company's development and use of AI for these purposes, including the use of an AI model for missile targeting in Gaza, which constitutes direct harm. Therefore, the event meets the criteria for an AI Incident due to the realized harm caused by the AI systems' use.

Palantir: a company that turned data into a shadowy military weapon

2025-12-04
Okaz newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by Palantir for data collection, analysis, and military applications, including targeting and surveillance. Although it does not report a concrete incident of harm, the described use cases and the strategic military applications imply a credible risk of future harms related to privacy violations, human rights breaches, and security threats. Therefore, this situation fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to AI Incidents involving significant harms.

AI-backed American companies surveilling Palestinians in Gaza

2025-12-06
Al-Manar newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Palantir's Maven platform and Dataminr's AI-based social media monitoring) being used in military and intelligence contexts that have led to harm, including lethal airstrikes and pervasive surveillance of civilians. The systems' outputs are used directly to guide military operations and monitor populations, resulting in violations of human rights and harm to communities. This meets the criteria for an AI Incident, as the AI systems' use has directly led to significant harm. The involvement is ongoing and realized rather than hypothetical, which excludes classification as an AI Hazard or Complementary Information.

How does a single company enable the US government to "see everything"?

2025-12-04
Independent Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system ('Gotham' by Palantir) used by government agencies to process and analyze data for immigration enforcement, which has directly led to harms including privacy violations, potential civil rights infringements, and social harms related to surveillance and deportation. The system's role is pivotal in enabling these harms through its data integration and profiling capabilities. The harms are realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Epstein invested in it and Snowden accused it of spying. So what is "Palantir"?

2025-12-04
Aljazeera Net
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly described as being used for military targeting and surveillance, uses that have directly harmed people and communities, fulfilling the criteria for an AI Incident. The article details the company's development and deployment of AI in ways that have caused or facilitated harm, including violations of human rights and harm to communities in conflict zones. The AI's involvement in these harms is clear and direct, not merely potential or speculative, so the classification as an AI Incident is justified.

A company that turned data into a shadowy military weapon - Sawaleif

2025-12-06
Sawaleif
Why's our monitor labelling this an incident or hazard?
The article focuses on Palantir's development and use of AI systems for surveillance and military purposes, which could plausibly lead to harms such as privacy violations, human rights abuses, and geopolitical risks. However, it does not describe a concrete incident in which harm has already occurred due to these AI systems. The event is therefore best classified as an AI Hazard, reflecting the credible risk posed by deploying such AI-enabled surveillance and targeting technologies in sensitive contexts.