Palantir AI Systems Linked to Human Rights Violations in ICE Operations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Amnesty International reports that Palantir’s AI-powered data analytics tools have facilitated U.S. ICE operations resulting in human rights violations, including family separations and detentions of migrants. The company failed to conduct adequate human rights due diligence, contributing to these harms through its technology’s use in enforcement actions.[AI generated]

Why's our monitor labelling this an incident or hazard?

Palantir's technology, which includes AI systems for data mining and tracking, has been used by ICE to identify and arrest migrants and asylum-seekers, resulting in human rights violations. The report by Amnesty International documents these harms and criticizes Palantir for failing to conduct human rights due diligence. Since the AI system's use has directly contributed to these harms, this qualifies as an AI Incident under the OECD framework.[AI generated]
AI principles
Respect of human rights, Accountability, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public, Children, Other

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Forecasting/prediction


Articles about this incident or hazard

Palantir's ICE Contracts 'Raise Human Rights Concerns', Report Warns As Firm Prepares To Go Public

2020-09-28
Forbes
Why's our monitor labelling this an incident or hazard?
Palantir's technology, which includes AI systems for data mining and tracking, has been used by ICE to identify and arrest migrants and asylum-seekers, resulting in human rights violations. The report by Amnesty International documents these harms and criticizes Palantir for failing to conduct human rights due diligence. Since the AI system's use has directly contributed to these harms, this qualifies as an AI Incident under the OECD framework.
Analysis | The Technology 202: Activists slam Palantir for its work with ICE ahead of market debut

2020-09-29
Washington Post
Why's our monitor labelling this an incident or hazard?
Palantir's software is an AI system used by ICE for operations that activists and Amnesty International claim contribute to human rights violations and racial profiling. The article highlights credible concerns about harm to fundamental rights caused by the AI system's use, which meets the definition of an AI Incident. The harm is indirect but significant, involving violations of human rights through the AI system's facilitation of ICE activities. The company's refusal to guarantee non-use for raids or deportations further supports the risk of harm.
Amnesty International slams Palantir's human rights record

2020-09-28
CBS News
Why's our monitor labelling this an incident or hazard?
Palantir's software is an AI system used to analyze and integrate data for law enforcement purposes. Its deployment by ICE has been linked by Amnesty International and other human rights groups to harmful actions against vulnerable populations, including arrests and deportations. These actions constitute violations of human rights, which are harms defined under the AI Incident framework. Although Palantir disputes some claims, the credible allegations and documented use of their AI system in these contexts justify classification as an AI Incident due to direct or indirect harm caused by the AI system's use.
Palantir Admits to Helping ICE Deport Immigrants While Trying to Prove It Doesn't

2020-09-29
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system developed by Palantir and used for surveillance that supports ICE's deportation activities. Amnesty International's briefing highlights that Palantir's failure to conduct human rights due diligence contributed to abuses, indicating that the AI system played a role in causing harm. Therefore, this qualifies as an AI Incident due to indirect human rights violations linked to the AI system's use.
Amnesty International attacks Palantir's human rights record on the eve of its IPO

2020-09-30
Fast Company
Why's our monitor labelling this an incident or hazard?
Palantir's software is an AI system used to analyze large amounts of unstructured data to support law enforcement operations. Amnesty International's report highlights that this software has been used by ICE to facilitate actions that have led to human rights violations, including the separation of children from their families and raids targeting migrants. Although Palantir denies direct involvement with certain enforcement divisions, evidence suggests its software is used in ways that contribute to these outcomes. Therefore, the AI system's use has directly or indirectly led to harm in the form of human rights violations, qualifying this event as an AI Incident.
Palantir Technologies Contracts Raise Human Rights Concerns Before NYSE Direct Listing

2020-09-28
Common Dreams
Why's our monitor labelling this an incident or hazard?
Palantir's technology, which includes AI systems used to identify, track, and investigate migrants and asylum-seekers, has been linked to harmful ICE operations resulting in human rights violations such as family separations and detentions. The briefing documents realized harm caused by the use of these AI systems, and the failure of Palantir to conduct due diligence or prevent misuse constitutes a direct or indirect contribution to these harms. Therefore, this event qualifies as an AI Incident due to violations of human rights caused by the use of AI systems.
Amnesty International accuses Palantir Technologies of contributing to human rights violations

2020-09-30
RNZ
Why's our monitor labelling this an incident or hazard?
Palantir's data analysis software qualifies as an AI system on account of its advanced data processing and analytics capabilities. Its use by ICE has allegedly contributed to human rights violations, fulfilling the criteria for an AI Incident: the AI system's use has directly or indirectly led to harm (violation of human rights). The article reports an accusation of actual harm, not merely potential risk, which classifies this as an AI Incident rather than a hazard or complementary information.
Palantir Technologies Contracts Raise Human Rights Concerns Before NYSE Direct Listing

2020-09-30
Scoop
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly mentioned as facilitating ICE operations that have caused violations of human rights, including harm to migrants and asylum-seekers through arrests, family separations, and detentions. Amnesty International documents these harms and highlights Palantir's failure to conduct due diligence or prevent misuse of its technology. The involvement of AI in causing these realized harms to fundamental rights fits the definition of an AI Incident, as the AI system's use has directly led to violations of human rights.
Palantir Technologies Contracts Raise Human Rights Concerns Before NYSE Direct Listing

2020-09-30
Scoop
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Palantir's ICM and FALCON technologies) used by ICE to carry out operations that have directly led to violations of human rights, including harm to vulnerable groups such as migrants and asylum-seekers. The harms described (family separations, detentions, deportations) fall under violations of human rights and harm to communities. The AI system's use is directly linked to these harms, making this an AI Incident. The report is not merely a warning or potential risk but documents realized harms facilitated by AI technology.