ICE Deploys AI-Powered Facial Recognition App for Field Identification


U.S. Immigration and Customs Enforcement (ICE) has deployed the Mobile Fortify app, which uses AI-driven facial recognition and fingerprint biometrics to identify individuals in real time. Originally intended for border use, the technology is now used domestically, raising concerns over privacy violations, wrongful arrests, and human rights abuses due to unreliable matches and lack of oversight.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Mobile Fortify app is an AI system using facial recognition and fingerprint biometrics to identify people. Its deployment by ICE for real-time identification in the field directly involves the use of AI. The report highlights concerns about the unreliability of face recognition technology causing false matches and wrongful arrests, which are harms to individuals' rights and communities. The use of this technology without proper legal authorization and oversight further supports the classification as an AI Incident due to violations of rights and potential harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest, Economic/Property

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Trump channels Xi's China surveillance playbook as ICE deploys facial recognition app to identify people

2025-06-27
Economic Times
Why's our monitor labelling this an incident or hazard?
The Mobile Fortify app is an AI system using facial recognition and fingerprint biometrics to identify people. Its deployment by ICE for real-time identification in the field directly involves the use of AI. The report highlights concerns about the unreliability of face recognition technology causing false matches and wrongful arrests, which are harms to individuals' rights and communities. The use of this technology without proper legal authorization and oversight further supports the classification as an AI Incident due to violations of rights and potential harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Are ICE agents using facial recognition phone app? What we know

2025-06-27
Newsweek
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a facial recognition and fingerprint biometric app) in a law enforcement context. Although no direct harm is reported, the deployment of such AI technology in immigration enforcement carries credible risks of harm, including violations of rights and privacy. This qualifies as an AI Hazard because the system's use could plausibly lead to an AI Incident involving harm to individuals or groups. The article does not describe a realized harm, nor does it focus on responses or updates to prior incidents, so it is not an AI Incident or Complementary Information; it clearly involves AI systems used in a sensitive context, so it is not unrelated.

How DHS facial recognition tech spread to ICE enforcement

2025-06-27
Reason
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (facial recognition and biometric identification) being used by ICE to identify individuals for enforcement actions. The technology's unreliability and false matches have already resulted in wrongful arrests, constituting direct harm to individuals' rights and liberties. The use of these AI systems in this manner breaches fundamental rights and legal protections, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, and involve violations of human rights and harm to communities.

New ICE mobile app pushes biometric policing onto American streets

2025-06-26
Biometric Update
Why's our monitor labelling this an incident or hazard?
The Mobile Fortify app is an AI system employing facial recognition and biometric matching algorithms integrated with large biometric databases. Its deployment and use by ICE agents have directly led to privacy violations, potential misidentifications, and constitutional concerns, which are harms to human rights and communities. The article details actual use cases and security lapses that have already occurred, not just potential risks, confirming realized harm. The AI system's role is pivotal in enabling this invasive biometric policing. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show

2025-06-26
404 Media
Why's our monitor labelling this an incident or hazard?
The Mobile Fortify app uses AI-based facial recognition and fingerprint matching systems to identify people, which is explicitly described. The use of this AI system by ICE in the field for enforcement purposes directly affects individuals' rights and privacy, constituting a violation of human rights and fundamental rights protections. The article details actual use and deployment, not just potential risks, indicating realized harm. The AI system's role is pivotal in enabling identification and enforcement actions that impact individuals and communities, meeting the criteria for an AI Incident under the OECD framework.

ICE's Shiny New 'AI' Facial Recognition App: False Positives Ahoy!

2025-06-30
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mobile Fortify) using facial recognition technology to identify individuals, which is an AI system by definition. The use of this system by ICE agents has directly led to harms including false positives and wrongful identification, which can cause injury or harm to persons and violations of rights. The concerns about minimal oversight and the system's unreliability further support that harm is occurring or is very likely occurring. Hence, this is an AI Incident rather than a hazard or complementary information.

A secret AI app to deport immigrants: the controversial technology the US is reportedly using

2025-06-27
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mobile Fortify) used by ICE agents for biometric identification and risk assessment through AI algorithms. The system's use directly leads to harm by enabling deportations and potentially violating fundamental rights. The involvement of AI in decision-making and identification processes that result in real-world consequences meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.

ICE adds AI-powered facial recognition to track migrants in the United States

2025-06-28
Semana.com
Why's our monitor labelling this an incident or hazard?
The event involves the active use of an AI system (Mobile Fortify) for biometric identification in immigration enforcement. The AI system's use directly affects individuals' rights and freedoms, constituting a violation of human rights or legal protections. The article reports that the system is already in use, not just a potential future risk, and highlights ethical debates and concerns about transparency and rights guarantees. Therefore, this qualifies as an AI Incident due to realized harm related to rights violations and surveillance.

The United States is using a secret AI app to deport immigrants

2025-06-26
Hipertextual
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mobile Fortify) employing facial recognition and AI algorithms to identify immigrants for deportation. The system's outputs are used by ICE agents to detain and deport individuals, which directly leads to harm (arbitrary detention, violation of rights, harm to communities). The use of AI in this context is central to the incident, and the harms described are realized, not hypothetical. Hence, this is an AI Incident rather than a hazard or complementary information.

The US immigration service has a "trick" for identifying people: a facial recognition app

2025-06-30
Urban Tecno
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition app Mobile Fortify) by ICE agents to identify people for deportation. The system's unreliability causes false matches, leading to wrongful arrests, which is a direct harm to individuals' rights and well-being. This meets the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to persons. The concerns raised by the ACLU and the evidence of wrongful arrests confirm the realized harm rather than just potential risk.

Facial recognition in immigration raids raises alarm over its discretionary use

2025-06-30
RPP noticias
Why's our monitor labelling this an incident or hazard?
Mobile Fortify is an AI system performing biometric recognition and matching using AI algorithms. Its use in immigration raids by ICE directly affects individuals' rights and privacy, with concerns about indiscriminate and unregulated use. This constitutes a violation of human rights and legal protections, fulfilling the criteria for an AI Incident. The article reports active use and associated harms or risks, not just potential future harm or complementary information, so the classification is AI Incident.