CBP Deploys Clearview AI Facial Recognition, Raising Privacy and Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

U.S. Customs and Border Protection (CBP) has signed contracts to integrate Clearview AI's facial recognition system into its intelligence and targeting operations. The deployment grants agents access to a vast database of scraped images, raising significant concerns about privacy violations, misidentification risks, and potential harm to individuals and communities in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition technology by Clearview AI) in a government surveillance context. The system's deployment has directly led to harms including violations of privacy and human rights, as it enables mass surveillance without consent and with significant error rates that could cause wrongful actions against individuals. These harms fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms related to privacy and rights violations.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

CBP Signs Clearview AI Deal to Use Face Recognition for 'Tactical Targeting'

2026-02-11
Wired
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Clearview AI's facial recognition platform) used by CBP in enforcement and intelligence operations. The system processes sensitive biometric data and has known limitations, including high error rates that can produce false matches, raising concerns about privacy violations, lack of transparency, and potential misuse affecting U.S. citizens. However, the article reports only the signing of the contract and its associated risks, not a specific instance of realized harm. The event is therefore best classified as an AI Hazard, reflecting plausible future harm from the AI system's use in this context.

End of anonymity: How facial recognition is redefining public privacy

2026-02-11
WRAL
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (facial recognition software that identifies individuals from images). It describes the use of these systems by law enforcement and companies to identify suspects and support enforcement actions, raising privacy and human rights concerns. However, it does not describe a specific event in which an AI system's use or malfunction directly or indirectly caused harm such as injury, rights violations, or harm to property or communities. Instead, it discusses the technology's broader implications, societal concerns, and regulatory responses. This aligns with the definition of Complementary Information: supporting data and context about AI systems and their impacts, without reporting a new incident or hazard.

Your Vacation Selfies Feed CBP Spy Net: $225K Clearview Deal Unlocks 60B Images!

2026-02-12
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology by Clearview AI) in a government surveillance context. The system's deployment has directly led to harms including violations of privacy and human rights, as it enables mass surveillance without consent and with significant error rates that could cause wrongful actions against individuals. These harms fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms related to privacy and rights violations.

CBP embeds Clearview AI into tactical targeting operations

2026-02-12
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Clearview AI's facial recognition platform) integrated into CBP's operations, satisfying the AI System criterion, and its use in tactical targeting and intelligence workflows constitutes use of that system. While no direct harm is reported, the article raises credible concerns about privacy compliance and potential violations of privacy and fundamental rights, since biometric data is being used without clear privacy impact assessments or transparency. This represents a plausible risk of privacy and rights harms if the deployment proceeds without proper safeguards, so the event fits the definition of an AI Hazard. It is not an AI Incident, because no actual harm or violation has been documented yet; it is not Complementary Information, because the article centres on the deployment itself and its compliance uncertainties rather than responses to a prior incident; and it is not Unrelated, because the event clearly involves an AI system and its potential impacts.

Behind the Badge and the Algorithm: How CBP's New Clearview AI Deal Signals a Turning Point for Federal Facial Recognition

2026-02-11
WebProNews
Why's our monitor labelling this an incident or hazard?
Clearview AI's facial recognition system is an AI system explicitly named in the article and used operationally by CBP for tactical targeting. The article details how this use implicates privacy rights and due process and creates risks of misidentification, which constitute violations of human rights and harm to communities. The deployment is active and ongoing, not hypothetical, and the harms are either occurring or highly plausible given documented biases and legal challenges. The event thus meets the criteria for an AI Incident rather than a hazard or complementary information.

Behind the Badge and the Algorithm: CBP's Sweeping New Clearview AI Contract Signals a New Era in Federal Surveillance

2026-02-13
WebProNews
Why's our monitor labelling this an incident or hazard?
Clearview AI's facial recognition system is an AI system: it uses an algorithm to match faces against a vast database to identify individuals, and CBP's contract enables its deployment in real-time tactical targeting. The article documents direct harms linked to the system's use, including privacy violations, potential wrongful arrests, and systemic bias affecting vulnerable populations, all of which fall under violations of human rights and harm to communities. The lack of regulatory oversight and the expansion of this technology in law enforcement further underscore the significance of the harm. The event therefore meets the criteria for an AI Incident rather than a hazard or complementary information.