UK Police Secretly Use AI Facial Recognition on Passport and Immigration Databases


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

UK police have secretly conducted hundreds of AI-driven facial recognition searches on passport and immigration photo databases since 2020, without public or parliamentary oversight. Privacy advocates warn this mass surveillance violates privacy rights, risks wrongful identification, and erodes civil liberties, prompting calls for a moratorium and legal review.[AI generated]

Why's our monitor labelling this an incident or hazard?

Retrospective facial recognition relies on AI systems that analyze biometric data to identify individuals, so the police's searches of the Home Office's passport photo database directly involve AI system use. The event highlights a breach of privacy rights and risks to civil liberties, which constitute violations of human rights under applicable law. The surge in searches and the lack of transparency about how the data is used indicate realized harm to individuals' privacy and trust, qualifying this event as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Police raid passport photo data in 'historic breach of privacy'

2025-08-07
The Telegraph

UK passport database images used in facial recognition scans

2025-08-08
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of facial recognition technology used by police forces. The use of these AI systems to scan millions of passport and immigration photos without transparency constitutes a violation of privacy rights and risks misidentification, which is a harm to individuals and communities. The event involves the use of AI systems leading to realized or ongoing harm (privacy breaches, potential injustice), thus qualifying as an AI Incident. The concerns about lack of oversight and the scale of data usage further support this classification.

Clandestine facial recognition searches of civil databases by UK police surge

2025-08-07
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems—facial recognition technology—used by UK police to search passport and immigration databases. The AI system's use has directly led to harms: wrongful detention of an individual due to a false match, privacy violations from unauthorized use of biometric data, and broader societal harm through surveillance without consent or oversight. These harms fall under violations of human rights and harm to communities. The event also highlights ongoing governance and policy gaps, but since harm has already occurred, this is classified as an AI Incident rather than a hazard or complementary information.

UK Police Secretly Scan Faces Against Passport Databases Since 2021

2025-08-08
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems by UK police to scan millions of passport and immigration photos, which involves AI system use. The lack of transparency and legal oversight, combined with the scale of surveillance and potential for biased outcomes, indicates direct harm to privacy and fundamental rights. The ongoing use since 2021 and the reported increase in scans confirm realized harm rather than just potential risk. Hence, this event meets the criteria for an AI Incident due to violations of rights and harm to communities through surveillance and privacy breaches.

Revealed: "Skyrocketing" scale of UK police's Secret Facial Recognition Searches of Passport and Immigration Databases

2025-08-07
Privacy International
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of facial recognition AI systems by police to conduct mass searches of large biometric databases without legal or public oversight, leading to privacy violations and risks of wrongful identification. This constitutes a violation of fundamental rights and privacy, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The AI system's use is central to the harm, and the harm is ongoing and realized, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information.

UK police use passport, immigration photos in facial recognition without public disclosure

2025-08-11
Computing
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition technology) used by police to scan large government-held image databases. Their use has directly led to privacy violations and risks of misidentification, which are harms to human and fundamental rights. The lack of public disclosure and parliamentary oversight compounds the harm by undermining democratic accountability. Because the event describes ongoing use and realized harm rather than a potential risk, it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

UK Police Using Passports For 'Secretive' Facial Recognition Searches

2025-08-11
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, to search large government databases containing millions of photos. The searches are conducted without clear legal frameworks or public knowledge, raising serious concerns about privacy violations and potential misidentification. The harms described include breaches of privacy and of democratic principles, which fall under violations of human rights. Because these harms are occurring and are directly linked to police use of AI facial recognition systems, the event meets the criteria for an AI Incident rather than a hazard or complementary information.