Palantir's ELITE AI System Used by ICE to Target and Raid Immigrant Communities

U.S. Immigration and Customs Enforcement (ICE) uses Palantir's ELITE AI system to identify, map, and prioritize deportation targets through data analytics and confidence scoring. The system's deployment has led to mass detentions, raids, and alleged human rights violations, raising serious legal and ethical concerns about AI-driven law enforcement in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

ELITE is an AI system that processes multiple data sources to generate actionable outputs (confidence scores, target lists, geospatial mapping) that ICE uses to conduct raids and arrests. Its use has directly led to harm, including the killing of a U.S. citizen and mass detentions, which violate human rights and harm communities. The event therefore qualifies as an AI Incident because of the direct link between the AI system's use and realized harm.[AI generated]
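To make the described pipeline concrete, the sketch below shows in minimal Python how multi-source records might be fused into a per-person confidence score and ranked into a target list. It is a hypothetical illustration only: the features, weights, and staleness decay are assumptions chosen for exposition, not Palantir's actual design.

    from dataclasses import dataclass

    @dataclass
    class Record:
        """One fused record about a person, drawn from several hypothetical sources."""
        name: str
        id_match: float       # 0-1 identity-resolution similarity (assumed feature)
        address_match: float  # 0-1 agreement between address sources (assumed feature)
        recency_days: int     # days since the newest supporting record

    def confidence(r: Record) -> float:
        """Toy score: weighted evidence agreement, decayed by staleness.
        Weights and decay rule are illustrative assumptions only."""
        staleness = max(0.0, 1.0 - r.recency_days / 365)
        return round((0.6 * r.id_match + 0.4 * r.address_match) * staleness, 3)

    records = [
        Record("subject-a", id_match=0.8, address_match=0.9, recency_days=30),
        Record("subject-b", id_match=0.7, address_match=0.4, recency_days=300),
    ]

    # Ranking into a "target list" is the step critics flag: stale or low-quality
    # inputs still produce a precise-looking number that can be read as evidence.
    for r in sorted(records, key=confidence, reverse=True):
        print(r.name, confidence(r))

The point of the sketch is that the output is only as good as the fused inputs: subject-a scores 0.771 and subject-b 0.103, numbers that reflect the assumed weights and decay as much as any ground truth about either person.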
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Forecasting/prediction, Goal-driven organisation


Articles about this incident or hazard

'ELITE': The Palantir App ICE Uses to Find Neighborhoods to Raid

2026-01-18
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
ELITE is an AI system that processes multiple data sources to generate actionable outputs (confidence scores, target lists, geospatial mapping) that ICE uses to conduct raids and arrests. Its use has directly led to harm, including the killing of a U.S. citizen and mass detentions, which violate human rights and harm communities. The event therefore qualifies as an AI Incident because of the direct link between the AI system's use and realized harm.

ICE using data and probability to decide where to detain and arrest people

2026-01-16
Biometric Update
Why's our monitor labelling this an incident or hazard?
The ELITE system is an AI system that uses data fusion, analytics, and probabilistic scoring to inform enforcement decisions. Its use has directly influenced ICE's operational tactics, leading to detentions and questioning of individuals without individualized probable cause, raising constitutional and legal issues. The harms include violations of human rights and constitutional rights (Fourth Amendment protections), which are explicitly described and have materialized in practice. The AI system's role is central to these harms, as it guides where and whom to detain based on probabilistic data rather than concrete evidence. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

'ELITE': The Palantir App ICE Uses To Find Neighborhoods To Raid

2026-01-18
Sons of Liberty Media
Why's our monitor labelling this an incident or hazard?
The article explicitly describes ELITE as an AI system employing advanced analytics and data integration to support ICE's deportation operations. The system's outputs directly inform ICE raids and arrests, which have resulted in harm to individuals and communities, including a fatal shooting. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The AI system's development and use are central to the harms described, making this event an AI Incident rather than a hazard or complementary information.

'ELITE': The Palantir App ICE Uses to Find Neighborhoods to Raid

2026-01-19
sgtreport.com
Why's our monitor labelling this an incident or hazard?
ELITE is an AI system: it uses advanced analytics, geospatial mapping, and confidence scoring to infer and prioritize enforcement targets. Its use by ICE to identify and detain individuals has directly led to harm in the form of arrests and detentions, which constitute violations of human rights and possibly other legal rights. The article provides concrete examples of the system's deployment resulting in real-world enforcement actions, thus meeting the criteria for an AI Incident due to direct harm caused by the AI system's use.

Data, quotas, and biometric surveillance are reshaping US immigration enforcement

2026-01-20
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (ELITE and Mobile Fortify) used in immigration enforcement that rely on probabilistic assessments and biometric identification to target individuals and locations. These systems have directly led to harms including wrongful arrests, misidentifications, erosion of constitutional rights, and expansion of surveillance to political protestors. The AI systems' outputs are pivotal in guiding enforcement actions that cause these harms. The misuse and malfunction of these AI tools (e.g., false biometric matches without audit or correction) further exacerbate the harm. The event meets the criteria for an AI Incident because the AI system's use and malfunction have directly and indirectly caused violations of human rights and harm to communities.
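To make the false-match mechanism concrete, the sketch below shows in minimal Python how a fixed threshold on a similarity score turns a probabilistic estimate into a hard "identification". The embeddings, metric, and threshold are invented for illustration and say nothing about how Mobile Fortify actually works.

    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Hypothetical face embeddings; real systems use high-dimensional vectors.
    gallery = {
        "person-1": [0.1, 0.9, 0.2],
        "person-2": [0.8, 0.3, 0.5],
    }
    probe = [0.7, 0.4, 0.5]
    THRESHOLD = 0.90  # assumed operating point; it sets the false-match rate

    best_id, best_score = max(
        ((pid, cosine(probe, emb)) for pid, emb in gallery.items()),
        key=lambda t: t[1],
    )
    # Anything over threshold is treated as an identification, even though it
    # is only a similarity estimate with a nonzero false-match rate.
    print(best_id if best_score >= THRESHOLD else "no match", round(best_score, 3))

Lowering the threshold catches more true matches but produces more false ones; without the audit and correction mechanisms the article describes as missing, those errors go unrecorded.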

ICE are using a terrifying Palantir app to hunt their targets

2026-01-20
Canary
Why's our monitor labelling this an incident or hazard?
The ELITE app is an AI system: it uses advanced analytics and geospatial data to generate outputs that influence real-world enforcement actions. Its use by ICE agents to identify and prioritize deportation targets directly leads to harm to individuals and communities, including potential violations of fundamental rights. The description of the app's functionalities and its deployment in operations that result in targeting and raids confirms the direct involvement of AI in causing harm. This event therefore qualifies as an AI Incident due to the realized harm linked to the AI system's use.

ICE alleged to use Palantir-developed tool that uses Medicaid data to track arrest targets | Fortune

2026-01-26
Fortune
Why's our monitor labelling this an incident or hazard?
The ELITE tool is an AI system that processes complex government data to generate actionable outputs for ICE enforcement. Its use has directly led to the targeting and arrest of individuals, implicating violations of human rights and privacy. The article details the actual deployment and use of this AI system in enforcement actions, not just potential or future risks. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).

Palantir Defends Work With ICE to Staff Following Killing of Alex Pretti

2026-01-26
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly describes Palantir's AI system being used by ICE to support enforcement operations that have led to harm, including the killing of a person and wrongful detentions. The AI system's role in providing data and operational support to ICE agents is a direct contributing factor to these harms, which include violations of human rights and harm to communities. The presence of an AI system is clear from the description of Palantir's platform providing real-time data integration and targeting capabilities. The harms are realized and ongoing, not merely potential, thus classifying this as an AI Incident rather than a hazard or complementary information.

Even Palantir Staff Are Now Disgusted With ICE

2026-01-27
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Palantir's AI surveillance and data analysis tools used by ICE to create dossiers and track individuals for deportation, which constitutes an AI system. The use of these systems has directly led to harms including racial profiling, unlawful detention, and potential violations of human rights, fulfilling the criteria for an AI Incident. The internal employee protests and company responses provide context but do not negate the realized harm caused by the AI system's deployment. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Even Palantir Staff Are Now Disgusted With ICE

2026-01-27
DNYUZ
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly mentioned as being used by ICE to track and target individuals for deportation, which has resulted in serious harms including racial profiling, unlawful detention, and a fatal shooting by federal agents. The AI system's involvement is indirect but pivotal in enabling these harms. The internal employee unrest and company responses further confirm the connection between the AI system's use and the harms described. Hence, this event meets the criteria for an AI Incident under violations of human rights and harm to communities caused directly or indirectly by the AI system's use.

Activists urge N.J. pension fund to drop ICE contractor that tracks immigrants

2026-01-28
NJ.com
Why's our monitor labelling this an incident or hazard?
Palantir's AI system is explicitly mentioned as being used by ICE to track immigrants, which involves AI-driven data analysis and case management. The activists' concerns and the references to fatal shootings and racial profiling indicate that the AI system's use has contributed to violations of human rights and harm to communities. The event describes realized harm linked to the AI system's deployment, not just potential harm. Hence, it meets the criteria for an AI Incident due to indirect harm caused by the AI system's use in immigration enforcement.

ICE Is Using Palantir's AI Tools to Sort Through Tips

2026-01-28
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Palantir's AI-enhanced tip processing tool using large language models) being used operationally by ICE. There is no report of direct or indirect harm resulting from this AI system's use, such as injury, rights violations, or other harms. The article focuses on describing the AI system's deployment, its intended function, and some internal reactions within Palantir. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides additional context and insight into AI use in government enforcement, fitting the definition of Complementary Information.

US data firm Palantir used by ICE and Israel Defense Forces 'should be rejected by London NHS trust'

2026-01-28
getwestlondon
Why's our monitor labelling this an incident or hazard?
The article describes the deployment and use of an AI-enabled data platform by the NHS, which processes sensitive personal health data. The concerns raised relate to potential violations of privacy and data protection, which fall under violations of human rights and fundamental rights. However, the article does not report any actual breach or misuse of data or harm caused by the AI system so far. The concerns and protests indicate a plausible risk that the AI system's use could lead to harm in the future if data is mishandled or accessed improperly. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving privacy and rights violations, but no incident has yet occurred.

Britain's Ministry of Defence agrees deal with Palantir

2026-01-28
TheRegister.com
Why's our monitor labelling this an incident or hazard?
While Palantir's data analytics capabilities likely involve AI systems, the article does not describe any incident or hazard involving harm caused or plausibly caused by these AI systems. The concerns raised are political and ethical but do not describe realized or potential AI harms as defined. Therefore, this event is best classified as Complementary Information, providing context on AI system deployment and governance issues without reporting an AI Incident or AI Hazard.

Palantir: Why is the Israel-linked surveillance firm embedded in Britain's NHS?

2026-01-28
Middle East Eye
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly involved in managing sensitive health data and military targeting applications. The article documents realized harms such as loss of patient trust, potential misuse of health data for surveillance and military purposes, and involvement in lethal operations causing civilian casualties. These constitute violations of rights and harm to communities and individuals. The integration of Palantir's AI in NHS operations and military contracts, combined with documented adverse effects and ethical concerns, meets the criteria for an AI Incident. The event is not merely a potential risk or complementary information but involves ongoing harm linked to AI system use.

Revealed: Australia's $100 million investment in controversial surveillance giant Palantir

2026-01-29
Crikey
Why's our monitor labelling this an incident or hazard?
The article involves an AI system provider (Palantir) known for AI surveillance tools, and the investment by Australia's Future Fund indicates support for these AI systems. However, the article does not describe a realized harm or incident caused by Palantir's AI systems, only the controversial nature and potential implications of their use. Therefore, this is best classified as Complementary Information, as it provides context and background on AI ecosystem developments and governance concerns without reporting a specific AI Incident or AI Hazard.

ICE Is Using Palantir's AI Tools to Sort Through Tips

2026-01-28
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (large language models) by ICE to process tips, confirming AI system involvement. The AI is used operationally to generate summaries and translations, indicating use rather than development or malfunction. There is no report of injury, rights violations, or other harms directly or indirectly caused by the AI system's outputs. The article focuses on describing the AI system's deployment and operational context, including internal company discussions and public transparency efforts. Therefore, it does not meet the criteria for an AI Incident or AI Hazard but provides important complementary information about AI use in a sensitive government context.
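For a concrete picture of what LLM-based tip triage can look like, here is a short, hypothetical sketch using the OpenAI Python client (the Tech Startups article below names OpenAI as a vendor). The model name, prompt, and wrapper function are assumptions, not a confirmed government configuration.

    # Hypothetical triage step: translate a free-text tip to English and
    # summarize it, mirroring the functions the article describes.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def triage_tip(tip_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system",
                 "content": "Translate the tip to English if needed, then "
                            "summarize it in two sentences. Flag claims that "
                            "are vague or unverifiable."},
                {"role": "user", "content": tip_text},
            ],
        )
        return response.choices[0].message.content

    print(triage_tip("Se reporta actividad inusual cerca de la calle 5."))

Even in this benign form, the summarization step decides what a human reviewer sees first, which is why the monitor tracks such deployments even when classifying them as complementary information.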

Why Palantir Technologies Stock Slumped Today

2026-01-29
The Motley Fool
Why's our monitor labelling this an incident or hazard?
While the article mentions an AI system used by ICE, it focuses on stock market reactions and public sentiment rather than any actual or potential harm caused by the AI system. There is no indication that the AI system's development, use, or malfunction has led or could plausibly lead to injury, rights violations, disruption, or other harms as defined. Therefore, this is not an AI Incident or AI Hazard. The article provides context and background information relevant to AI's role in government operations and market perceptions, fitting the definition of Complementary Information.

Who takes Palantir's money? A new tracker finds out.

2026-01-29
Mother Jones
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly mentioned as being used by ICE for surveillance and targeting in immigration enforcement, which constitutes a violation of human rights and harm to communities. The article details realized harms from these AI systems' deployment, including profiling and potential wrongful deportations. The political donations and lobbying further entrench these harms. Hence, this qualifies as an AI Incident due to direct and indirect harm caused by the AI system's use.

Palantir Employees Express Disgust Over Company's Part in Helping ICE

2026-01-29
International Business Times UK
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly mentioned as being used by ICE to support enforcement operations that have resulted in a fatality and other alleged harms such as wrongful detentions and racial profiling. The AI tools assist in processing tips and identifying enforcement targets, which directly influence ICE actions. The fatal shooting and ongoing controversies demonstrate realized harm linked to the AI system's use. The internal employee outrage and public criticism further confirm the significance of the harm. Hence, the event meets the criteria for an AI Incident because the AI system's use has indirectly led to injury and harm to persons and communities.

ICE is using AI from Palantir and OpenAI for immigration enforcement, critics warn of surveillance overreach

2026-01-29
Tech Startups
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions multiple AI systems used by ICE, including generative AI and data integration platforms that automate enforcement decisions. These systems process sensitive personal data, including Medicaid records, to identify and target individuals for deportation, which constitutes a violation of human rights and privacy. The involvement of AI in these enforcement actions has directly led to harms such as surveillance overreach and potential wrongful targeting, as highlighted by civil liberties groups and legal actions. The harms are realized and ongoing, not merely potential, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

ICE is using an app created by Palantir to track down and expel immigrants

2026-01-27
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is explicit: the ELITE app uses AI to analyze integrated data and generate leads for enforcement. The AI system's use directly leads to harm by enabling ICE to identify and target individuals for deportation, which involves violations of human rights and privacy. The article details how the AI system's outputs are used operationally to locate and expel people, constituting realized harm. The involvement of AI in this process is central and pivotal, meeting the criteria for an AI Incident rather than a hazard or complementary information. The harm is not speculative or potential but ongoing and concrete, including privacy violations and repression of vulnerable populations.

Trump's goon squads are armed by the 'black knight' of the tech right: ICE is using an app

2026-01-28
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Palantir's ELITE software) used by ICE to identify and target individuals for deportation. The system's use of sensitive data and predictive scoring to map and prioritize targets has directly led to harm, including a fatal incident and widespread civil rights concerns. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident. The article details realized harm and the AI system's pivotal role in causing it, rather than just potential harm or complementary information.

Palantir helps ICE track down immigrants, while Meta censors posts about federal agents

2026-01-28
Milano Finanza
Why's our monitor labelling this an incident or hazard?
Palantir's software is explicitly described as an AI system that processes and infers from complex data to generate outputs influencing ICE's enforcement actions. The system's use has directly led to harm in the form of privacy violations, potential human rights infringements, and social harm to immigrant communities. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The Meta censorship is a complementary governance action and does not change the classification. Hence, the event is best classified as an AI Incident.

Not just Minneapolis: Palantir's systems in ICE operations

2026-01-26
Startmag
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems developed by Palantir used by ICE for surveillance, identification, and operational management in immigration enforcement. These systems influence decisions and actions that have led to the deaths of civilians and widespread protests, indicating harm to persons and communities. The AI's role is pivotal in enabling these operations through data aggregation, risk scoring, and real-time tracking. The harms described include violations of human rights and harm to communities, fitting the criteria for an AI Incident. Although the article does not attribute direct causation solely to the AI, the AI systems are integral to the enforcement processes that have caused harm, thus meeting the definition of an AI Incident.

USA: ICE uses an app created by Palantir to track down immigrants while circumventing the wiretapping law, as anticipated by the GdI

2026-01-28
ilgiornaleditalia.it
Why's our monitor labelling this an incident or hazard?
The described system is an AI system as it uses advanced data integration, predictive analytics, and geospatial mapping to generate outputs influencing enforcement operations. Its use directly leads to violations of human rights and privacy, as it circumvents legal safeguards and enables intrusive surveillance and targeting of vulnerable populations. The harm is realized and ongoing, including breaches of privacy rights and potential repression, fitting the definition of an AI Incident. The involvement of AI in the development and use of this system is explicit and central to the harm caused.

The British government's dangerous dependence on America's Palantir

2026-01-30
Il Foglio
Why's our monitor labelling this an incident or hazard?
Palantir's AI systems are explicitly involved in processing and managing sensitive data for UK defense and nuclear programs, which are critical infrastructures. The article does not report any realized harm but emphasizes the significant risk and vulnerability this dependency creates, especially given geopolitical tensions and legal mechanisms that could compel data access by foreign authorities. The concerns raised by experts and parliamentarians about Palantir being a 'vector of malign influence' and the potential for misuse of AI-driven surveillance and data analytics justify classifying this as an AI Hazard. There is no indication of an actual incident causing harm yet, so it is not an AI Incident. The article is not merely complementary information because it focuses on the risk and vulnerability itself rather than responses or updates. It is not unrelated as it clearly involves AI systems and their implications.

Silicon Valley takes a stand against ICE. But Palantir & Co. are making 22 billion from it

2026-01-30
lastampa.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Palantir's Elite app) used by ICE to locate and arrest undocumented immigrants, which is directly linked to police violence and deaths. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The involvement of AI in integrating and analyzing data to facilitate these actions is clear, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.