Palantir's AI Systems Used for Surveillance and Military Operations in Ukraine and Lithuania Raise Human Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Palantir's AI technology, backed by Peter Thiel, is being deployed for military surveillance, battlefield decision-making, and reconstruction in Ukraine and Lithuania. While intended to enhance defense and rebuilding, its use has led to political repression and raises significant ethical and human rights concerns regarding intrusive surveillance and potential misuse in conflict zones.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems used in open-source intelligence analysis that have been applied in a real military conflict (the Russia-Ukraine war), where AI-enabled satellite image analysis and social media data processing led to the identification of targets and subsequent attacks. This constitutes direct or indirect harm to persons and communities (harm categories (a) and (d)). The involvement of AI in these operations is clear and central. Additionally, the cooperation to deploy AI in financial asset recovery and government intelligence further supports the presence of AI systems with potential for harm. The article does not merely discuss potential or future risks but reports actual AI use with harmful outcomes, thus qualifying as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public; Civil society

Harm types
Human or fundamental rights; Public interest; Psychological; Physical (injury); Physical (death)

Severity
AI incident

Business function
Monitoring and quality control; Logistics; Planning and budgeting; ICT management and information security

AI system task
Recognition/object detection; Event/anomaly detection; Forecasting/prediction; Reasoning with knowledge structures/planning; Goal-driven organisation


Articles about this incident or hazard

Palantir Signs on For Reconstruction Work in War-Torn Ukraine

2023-05-25
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by Palantir for battlefield decision-making and reconstruction planning in Ukraine, confirming AI system involvement. However, it does not describe any realized harm such as injury, rights violations, or disruption caused by these AI systems. The military use of AI raises concerns about potential risks, but no incident or near-miss is reported. The reconstruction use of AI is framed positively. Thus, the event provides supporting information about AI deployment and its societal and governance context rather than reporting an incident or hazard. This fits the definition of Complementary Information.
Palantir and Ministry of Digital Transformation of Ukraine strike reconstruction partnership

2023-05-25
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Palantir's software platform) for defence and reconstruction in Ukraine. While the partnership aims at positive outcomes, deploying AI in a conflict zone inherently carries risks of harm, including indirect harm to people or infrastructure if the system malfunctions or is misused. The article reports planned or ongoing cooperation and intended use rather than any realised harm, so it does not describe an AI Incident. It does, however, describe a plausible future scenario in which AI use in defence and reconstruction could lead to harm, and therefore qualifies as an AI Hazard.
AI on the battlefield: Next stop for Peter Thiel after PayPal, Hulk Hogan, Trump and Facebook

2023-05-24
EL PAÍS English Edition
Why's our monitor labelling this an incident or hazard?
The article centers on AI systems being developed and deployed for military use, which could plausibly lead to significant harms such as increased violence, violations of human rights, and escalation of armed conflicts. Although no concrete incident of harm is described, the credible risk of AI-enabled autonomous weapons and battlefield decision-making systems causing harm is emphasized. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. The article does not report a realized harm but discusses the plausible future risks and ethical concerns associated with military AI.
Palantir CEO: Companies Calling for a Pause on AI Development Are Doing So Because...

2023-06-09
东方财富网
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly, particularly Palantir's AI platform for military use, which qualifies as an AI system. However, it does not describe any realized harm or incident resulting from AI use or malfunction. Nor does it describe a specific event where AI could plausibly lead to harm imminently. The content is primarily about strategic positions and industry competition regarding AI development pace, which fits the definition of Complementary Information as it provides context and governance-related discourse rather than reporting an AI Incident or AI Hazard.
吉宏股份 Partners with 颐信科技, Chasing US AI Application Leader Palantir

2023-06-12
雪球
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in open-source intelligence analysis that have been applied in a real military conflict (the Russia-Ukraine war), where AI-enabled satellite image analysis and social media data processing led to the identification of targets and subsequent attacks. This constitutes direct or indirect harm to persons and communities (harm categories (a) and (d)). The involvement of AI in these operations is clear and central. Additionally, the cooperation to deploy AI in financial asset recovery and government intelligence further supports the presence of AI systems with potential for harm. The article does not merely discuss potential or future risks but reports actual AI use with harmful outcomes, thus qualifying as an AI Incident rather than a hazard or complementary information.
Palantir: Military Is AI's Most Important Application; Pausing Risks Being Overtaken by Adversaries

2023-06-12
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
Palantir's AI system is explicitly mentioned as being used by the Ukrainian military to improve artillery effectiveness, directly affecting the conflict and causing harm to opposing forces. This constitutes harm to groups of people (military personnel) and communities involved in the conflict, fitting the definition of an AI Incident. The article does not merely discuss potential risks or general AI development but describes the realised use of AI in a military context causing harm, thus qualifying as an AI Incident.
Palantir: Military Is AI's Most Important Application; Pausing Risks Being Overtaken by Adversaries

2023-06-12
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used in military operations, specifically Palantir's AI technology aiding Ukrainian forces. Using AI to increase artillery precision and lethality gives AI a direct role in conflict that can cause injury or death. However, the article does not describe a specific event in which AI malfunctioned or caused unintended harm, nor does it report a realised incident of harm from AI misuse or failure. Instead, it discusses the ongoing competition and strategic importance of AI in military contexts, including potential future risks if development is paused. It therefore fits best as an AI Hazard, highlighting plausible future harms from military AI applications and the geopolitical AI arms race rather than a concrete AI Incident or Complementary Information.