UK Police Expand Use of AI Facial Recognition Vans Amid Bias Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Seven UK police forces, including West Yorkshire, Greater Manchester, and Sussex, are deploying AI-powered facial recognition vans to identify suspects. Despite police claims of low false-alert rates, civil liberties and anti-racism groups warn of the technology's history of inaccuracies and racial bias, raising concerns about potential rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The vans use AI-based facial recognition software, so an AI system is involved. The event concerns the use of this AI system in public surveillance, which could plausibly lead to harms such as violations of privacy rights, racial bias, or other human rights issues. Since no actual harm or incident is reported, but there is credible concern about potential harms, this qualifies as an AI Hazard rather than an AI Incident. The opposition and concerns highlight the plausible risk of harm from the deployment of this technology.[AI generated]
AI principles
Fairness, Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Live facial recognition vans launched in Sussex and Surrey

2025-11-13
BBC
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) in active deployment for law enforcement. The system directly affects individuals by identifying them in public spaces and triggering police interventions, implicating privacy rights and raising concerns about racial bias. These issues constitute violations of human and fundamental rights, meeting the harm criteria of the AI Incident definition. Since the AI system is already operational and the concerns about bias and intrusiveness have materialized, this qualifies as an AI Incident rather than a hazard or complementary information.

New facial recognition vans rolled out for use by seven more police...

2025-11-13
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (facial recognition software) used by police forces, which fits the definition of an AI system. The deployment and use of this system could plausibly lead to harms such as violations of human rights (privacy, potential racial bias) and harm to communities. However, the article does not describe any actual harm or incident resulting from the AI system's use, only past criticisms and ongoing concerns, so it does not meet the threshold for an AI Incident. Instead, it fits the definition of an AI Hazard, as the use of this technology could plausibly lead to incidents involving harm or rights violations. The article also describes some mitigation and transparency measures, but these do not negate the potential risk. Therefore, the classification is AI Hazard.

Facial recognition vans 'not about surveillance'

2025-11-13
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (live facial recognition software) in active deployment by law enforcement, which directly affects individuals through identification and potential police action. The concerns about racial bias and privacy rights indicate possible violations of human rights or fundamental rights, which are harms under the AI Incident definition. Since the system is already in use and has led to alerts and interactions with people, the harms are realized or ongoing rather than merely potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in activities that raise human rights concerns and potential harm to individuals and communities.

Facial recognition vans to be rolled out in new areas

2025-11-13
The Independent
Why's our monitor labelling this an incident or hazard?
The facial recognition vans clearly involve AI systems performing real-time identification tasks. The criticisms about racial bias and inaccuracies indicate potential violations of human rights and harm to communities, which are recognized harms under the framework. However, the article does not describe any actual harm or incidents resulting from the use of these vans, only the potential for such harm based on the system's known limitations and history. Thus, this event fits the definition of an AI Hazard, as the deployment could plausibly lead to an AI Incident involving rights violations or community harm, but no direct or indirect harm has been documented in this report.

'Undoubtedly a risk': Sussex Police defend new facial recognition vans

2025-11-13
ITV Hub
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (live facial recognition) in policing, which involves AI system use. However, it does not report any direct or indirect harm caused by the system, such as violations of rights, injury, or community harm. The police acknowledge risks and improvements but no incident of harm is described. The focus is on explaining the technology, its deployment, and addressing concerns, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

New facial recognition vans rolled out for use by seven more police forces

2025-11-13
Express & Star
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system: facial recognition software used in police vans, whose use is ongoing and expanding. Although there is no direct report of harm (such as wrongful arrests or rights violations), the documented history of inaccuracies and racial bias indicates a plausible risk of harm to human rights, including violations of privacy and discrimination. The police acknowledge limitations and have taken steps to improve the system and its transparency, but the concerns remain. Since no actual harm is described, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Surrey Police's live facial recognition van coming soon to your town centre

2025-11-13
Surrey Advertiser Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as live facial recognition software used by police. The system's use is intended to identify individuals in public spaces, which implicates privacy and potential bias concerns. While the article discusses safeguards and improvements, it does not report any realized harm or incidents resulting from the system's deployment. However, the nature of live facial recognition technology and its application in policing plausibly could lead to violations of human rights or harm to communities, such as misidentification or discriminatory impacts. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

New facial recognition vans rolled out for use by seven more police forces

2025-11-13
The Irish News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (facial recognition software) used by police forces. The system's use is described, including its capabilities and deployment. While there are concerns about inaccuracies and racial bias, no concrete harm or incident is reported. The police report a very low false positive rate and emphasize transparency measures. Since no actual harm or rights violations are documented, but the technology's nature and history suggest plausible future harm, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the deployment and potential risks, not on responses or updates to past incidents. It is not unrelated because the AI system and its societal implications are central to the report.

New facial recognition vans rolled out for use by seven more police forces

2025-11-13
Shropshire Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition software) by police forces, fulfilling the AI system involvement criterion. However, it does not describe any actual harm occurring from the system's use, only concerns and criticisms about potential inaccuracies and bias. The police report improved accuracy and transparency measures, indicating ongoing governance and mitigation efforts. Since no direct or indirect harm has been reported, and the article mainly provides updates on deployment and responses to past criticisms, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Police defend new facial recognition camera vans over racial bias claims - Yorkshire Live

2025-11-13
huddersfieldexaminer
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) in active deployment by police forces, which has a documented history of racial bias and inaccuracies that could plausibly lead to violations of human rights and discriminatory harm. However, the article does not describe any specific incident where harm has occurred; rather, it focuses on the deployment, the police's defense of the technology, and the concerns raised by advocacy groups. Therefore, this situation represents a plausible risk of harm rather than a realized harm. It fits the definition of an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving violations of rights or harm to communities, but no such incident is reported as having occurred yet.

Facial recognition vans rolled out by police in Yorkshire

2025-11-13
Yorkshire Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (facial recognition software) used by police vans. While there are acknowledged concerns about inaccuracies and racial bias, no specific harm or violation has been reported as having occurred in this deployment. The focus is on the rollout, improvements, and transparency, which aligns with providing complementary information about the AI system's use and governance. Hence, this event fits best as Complementary Information rather than an Incident or Hazard.

Police launch live facial recognition vans across Sussex and Surrey

2025-11-13
Sussex Express
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition) actively used by law enforcement, which fits the definition of an AI system. The use is operational, not developmental or malfunction-related. No direct or indirect harm has been reported so far, but the technology's deployment could plausibly lead to harms such as violations of privacy rights or wrongful identification, which are human rights concerns. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving human rights violations, but no such incident has yet occurred or been reported in the article.

New facial recognition vans rolled out for use by seven more police forces

2025-11-13
Western Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition software) in police vans, which is being expanded to more forces. While there are documented concerns about inaccuracies and racial bias, no specific harm or rights violations are reported as having occurred in this rollout. The article mainly provides information about the deployment, performance data, and responses to criticism, which aligns with providing complementary information about an AI system's societal and governance context rather than reporting an incident or hazard. Hence, this event is best classified as Complementary Information.

Live facial recognition vans spread across seven additional UK cities | Biometric Update

2025-11-16
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (live facial recognition) in active law enforcement operations, which have directly led to arrests and interventions. The article highlights concerns about privacy violations, potential bias, and false alerts affecting individuals, including racial bias, which constitute violations of human rights and harm to communities. The AI system's deployment and use have caused realized harm, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.