Merseyside Police Deploy Live Facial Recognition in Liverpool, Raising Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Merseyside Police have begun deploying live facial recognition cameras in Liverpool city centre to identify wanted individuals and prevent crime. The AI system's use has sparked concerns over privacy, racial bias, and potential human rights violations, particularly affecting marginalized communities. Critics highlight risks of mass surveillance and discrimination. [AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition technology is an AI system that processes live video feeds to identify individuals by matching faces to a watchlist. Its deployment by police for identifying suspects directly involves AI use. The article describes actual use cases where suspects were identified and arrested, indicating realized impact. The use of such technology raises concerns about violations of human rights, privacy, and potential misuse or errors, which are recognized harms under the AI Incident definition. Although the police emphasize safeguards, the deployment itself constitutes an event where AI use has led to significant impacts on individuals, fitting the criteria for an AI Incident rather than a hazard or complementary information. [AI generated]
AI principles
Privacy & data governance · Fairness · Respect of human rights · Transparency & explainability · Democracy & human autonomy · Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Facial recognition cameras rolled out by Merseyside Police

2025-12-09
BBC
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes live camera feeds to identify individuals. While the police state that only people on a watchlist are identified and final decisions are made by officers, the technology's use in public spaces inherently carries risks of privacy violations and potential misuse. The article does not report any realized harm or incidents but highlights the rollout of the system, implying plausible future risks. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Police facial recognition cameras to be installed in Liverpool city centre - Liverpool Echo

2025-12-09
Liverpool Echo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) for law enforcement purposes, which can plausibly lead to harms such as privacy violations, racial bias, or wrongful identification. However, the article does not report any actual incidents of harm, misuse, or malfunction. The focus is on the deployment and the safeguards in place, as well as public concerns. Therefore, this qualifies as an AI Hazard because the technology's use could plausibly lead to harms, but no harm has yet been documented in this deployment.

Liverpool City centre is about to go full Big Brother

2025-12-11
The Canary
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Live Facial Recognition) whose deployment the article links directly to harms such as racial bias and discrimination, violation of privacy rights, and suppression of free speech and protest rights. These harms fall under violations of human rights and harm to communities. Because the article describes these harms as occurring or imminent, especially for marginalized communities, the event qualifies as an AI Incident rather than a hazard or complementary information.

Police roll out live facial recognition in Merseyside

2025-12-09
Wirral Globe
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Live Facial Recognition) used by law enforcement. However, it does not describe any actual harm or incidents caused by the AI system's use, malfunction, or development. The focus is on the deployment plan, safeguards, and intended benefits, with no mention of realized injury, rights violations, or other harms. Therefore, this event represents a plausible future risk scenario where AI use could lead to harm, but no harm has yet occurred. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as it is not an update on a past incident but a new deployment with potential risks.

Police to use facial recognition cameras in Liverpool

2025-12-09
St Helens Star
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition) actively used by police to identify individuals, which directly affects people's privacy and rights. The use of biometric data and real-time identification by law enforcement is a clear example of AI system use leading to potential violations of human rights. Although the police claim safeguards, the deployment itself is an event where AI use has led to or is leading to harm in terms of privacy and rights. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.