Facial Recognition AI Leads to False Arrests and Rights Violations in Policing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Police use of facial recognition AI has resulted in multiple wrongful arrests, particularly of Black individuals, and widespread misidentifications, raising concerns about racial bias and civil rights violations. Despite these harms, law enforcement agencies in the UK and US continue to expand their use of the technology, prompting criticism from rights advocates.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition technology is an AI system used by the police to identify suspects. Its deployment has already led to privacy rights violations and, per a court ruling, breaches of equalities law, indicating realized harm. Concerns about intrusive surveillance and chilling effects on protest rights further confirm harm to communities and fundamental rights. This event therefore qualifies as an AI Incident: an AI system is directly involved in violations of human rights and harm to communities.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Other

Harm types
Human or fundamental rights, Psychological, Reputational, Economic/Property, Public interest

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard

Facial recognition could transform policing in way DNA testing did, says Met chief

2023-09-11
The Guardian
Facial recognition will be as big as DNA in criminal investigations

2023-09-11
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition technology qualifies as an AI system because it performs automated image analysis and matching against databases. The article discusses its current use in identifying suspects, which is a direct use of AI. However, no specific incident of harm or violation is reported; the concerns raised are about potentially intrusive surveillance and the need for reform, which are warnings of plausible future harm rather than documented incidents. This event is therefore best classified as Complementary Information: it provides context on the deployment of, and societal and governance responses to, AI use in policing without describing a concrete AI Incident or AI Hazard.
Why police use of facial recognition risks miscarriages of justice

2023-09-14
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition AI systems by police departments; these systems have directly caused harm to individuals through false arrests and violations of civil rights. The harms include wrongful imprisonment, emotional distress, and potential long-term impacts on individuals' lives and careers. The article provides concrete examples of these harms occurring, meeting the criteria for an AI Incident. The involvement of AI is explicit and central to the harm described, and the harms are realized rather than potential.
Facial recognition will 'transform investigative work,' says UK's top cop | Biometric Update

2023-09-13
Biometric Update
Why's our monitor labelling this an incident or hazard?
Facial recognition, as described in the article, is an AI system used by police forces. The article reports that 87% of alerts were misidentifications, implying harm through wrongful suspicion or privacy violations and constituting a breach of rights. The use of this technology in public spaces, and its expansion despite criticism and watchdog warnings, shows that harms are occurring or have occurred. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to violations of human rights and harm to communities. The article does not merely discuss potential future harm or governance responses; it reports ongoing use with documented issues, confirming realized harm.
Facial Recognition Technology and False Arrests: Should Black ... - Capital B

2023-09-14
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—facial recognition software—whose malfunction or biased performance has directly led to wrongful arrests, constituting harm to individuals and violations of their rights. The article details realized harm (false arrests and jail time) caused by the AI system's inaccurate outputs, meeting the criteria for an AI Incident. The discussion of legal and policy responses supports the incident classification but does not overshadow the primary harm described.