Facial Recognition AI in Policing Leads to False Identifications and Racial Bias Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Police use of AI-powered facial recognition technology has resulted in false identifications and disproportionately targeted non-white individuals, raising concerns about bias and rights violations. Despite limited success, authorities in New Orleans and the UK are expanding its use, prompting criticism from lawmakers and civil rights advocates over privacy and civil liberties.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system—facial recognition technology—used by law enforcement. The use of this AI system has directly led to harms: racial bias against Black individuals and ineffective policing outcomes, including false matches and wrongful targeting. These harms fall under violations of human rights and harm to communities. The article provides data showing the system's failures and biased application, confirming realized harm rather than potential harm. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Fairness; Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Ministers to encourage the police to deploy AI to tackle crimes

2023-10-29
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system—facial recognition technology—being promoted for police use to tackle crime. However, it does not describe any actual harm caused by the system, nor a credible risk of future harm from its deployment. Instead, it reports on government policy and planned increased use, which fits the definition of Complementary Information as it relates to governance and societal responses to AI. With no direct or indirect harm reported and no plausible future harm event described, the classification is Complementary Information.

Privacy bill fails to address dangers of facial recognition technology: coalition

2023-11-01
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that can lead to harms such as biased or false identifications, which threaten human rights and privacy. Although no specific harm has been reported as having occurred, the coalition's warning about the risks and the failure of legislation to mitigate them indicates a credible potential for harm. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harms from the use of AI systems in facial recognition and the lack of adequate regulatory safeguards.

'Wholly ineffective and pretty obviously racist': Inside New Orleans' struggle with facial-recognition policing

2023-10-31
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—facial recognition technology—used by law enforcement. The use of this AI system has directly led to harms: racial bias against Black individuals and ineffective policing outcomes, including false matches and wrongful targeting. These harms fall under violations of human rights and harm to communities. The article provides data showing the system's failures and biased application, confirming realized harm rather than potential harm. Therefore, this event qualifies as an AI Incident.

Why this New Orleans Democrat champions police use of facial recognition

2023-10-31
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (facial recognition technology) used by law enforcement. However, it does not describe any realized harm such as injury, rights violations, or property/community harm caused by the AI system. The low effectiveness and potential for bias are discussed, but no direct or indirect harm has been reported. The safeguards and oversight mechanisms are also detailed, indicating governance responses. Therefore, the article fits best as Complementary Information, providing context and updates on the use and governance of an AI system without reporting a new AI Incident or AI Hazard.

'Computer got it wrong': Robert was locked up by AI for a crime he didn't commit

2023-10-31
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police to identify suspects. The AI system's flawed training data caused racial bias, leading to false identification and wrongful arrest of Robert Williams, which is a direct harm to his personal liberty and rights. The article documents realized harm (wrongful detention, legal consequences, emotional distress) and systemic issues with AI misuse in law enforcement. This fits the definition of an AI Incident because the AI system's malfunction and use directly caused harm to a person and violated fundamental rights.

New Orleans police's facial recognition tool mostly used against Black suspects

2023-10-31
The Independent
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system used here for suspect identification. The article reports that the system is mostly used against Black suspects, showing racial bias, and is largely ineffective with many erroneous matches. This biased and inaccurate use of AI in policing leads to violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. Although no false arrests are reported in this specific case, the systemic bias and errors represent realized harm. The involvement is through the use of the AI system by law enforcement, directly leading to discriminatory outcomes and ineffective policing.

Police told to double use of facial recognition technology to nail criminals

2023-10-28
The Sun
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by police to identify suspects, and its use has directly led to arrests, which affect individuals' rights and privacy. The deployment of such AI systems in policing raises concerns about violations of human rights and privacy, fitting the definition of an AI Incident given the AI system's direct involvement in law enforcement actions affecting individuals. The article reports actual use and outcomes (arrests), not just potential risks, so it qualifies as an AI Incident rather than a hazard or complementary information.

AI facial recognition technology will help police to catch more criminals

2023-10-30
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI facial recognition systems by police, which are AI systems as they perform real-time identification and matching tasks. However, the article does not describe any realized harm or incident resulting from these systems. Instead, it focuses on the benefits, governance, and plans for expansion. Therefore, it does not qualify as an AI Incident or AI Hazard. It provides contextual information about AI deployment and governance in policing, which fits the definition of Complementary Information.

Police are now using AI to spot criminals

2023-10-28
EXPRESS
Why's our monitor labelling this an incident or hazard?
The police's use of AI facial recognition to identify suspects and catch criminals is a clear example of AI system use with direct impacts on individuals and communities, including the apprehension of offenders for serious crimes. This fits the definition of an AI Incident because the AI system's use has directly led to significant outcomes affecting human rights and public safety. Although the article mentions transparency and a legal basis, the deployment of such technology inherently carries risks of rights violations and societal impact, which fall within the harm scope. Therefore, this event is best classified as an AI Incident.

UK police urged to double down on facial recognition

2023-10-30
The Next Web
Why's our monitor labelling this an incident or hazard?
The article discusses the use and planned expansion of AI facial recognition technology by UK police, which involves AI systems. However, it does not describe any realized harm or incident resulting from the AI system's use, nor does it report a near miss or credible imminent risk of harm. Instead, it presents a policy push and public debate around the technology's use and risks. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context on governance, societal responses, and ongoing concerns related to AI surveillance technologies.

UK police minister wants facial recognition use doubled

2023-10-31
theregister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (algorithmic-assisted facial recognition) in policing, which have directly led to arrests and identification of suspects, thus causing realized impacts on individuals. The harms include potential violations of privacy and rights, as well as documented bias concerns. The minister's call to double the use of these systems indicates ongoing and increasing deployment with direct consequences. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI systems in causing harm or rights violations through their use in law enforcement.

Privacy bill fails to address dangers of facial recognition technology: Coalition

2023-11-01
Toronto Sun
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that can lead to harms such as false identifications, which can cause violations of rights or harm to individuals. The article highlights the plausible future harms of such AI systems and the legislative failure to mitigate these risks. Since no actual harm or incident is reported, but credible risks are identified, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

More statements, legislation but little progress on facial recognition rules | Biometric Update

2023-10-31
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article centers on legislative efforts and civil society advocacy regarding facial recognition AI, emphasizing the potential for harm to civil liberties and privacy if the technology goes unregulated. However, it does not describe any concrete event in which an AI system's use directly or indirectly caused harm. The discussion of risks and the need for public debate resembles an AI Hazard, since the technology could plausibly lead to future harm, but no actual harm is reported. Because the article mainly covers ongoing policy discussions and advocacy, without a specific incident or imminent hazard event, it is best classified as Complementary Information, providing context and updates on governance and societal responses to facial recognition AI.

UK police minister calls for more live facial recognition | Biometric Update

2023-10-30
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (live and retrospective facial recognition) by UK police, which have directly led to arrests and surveillance practices impacting individuals' privacy and potentially violating rights. The article documents actual deployments and outcomes, not just potential risks, fulfilling the criteria for an AI Incident. The concerns about misidentification and legal incoherence further support the classification as an incident due to realized harms and rights violations. Although there are calls for expanded use, the presence of actual arrests and surveillance harm takes precedence over potential future harms (hazards).

Bill must rein in facial recognition: coalition

2023-11-01
Winnipeg Free Press
Why's our monitor labelling this an incident or hazard?
The article discusses the plausible risks and potential harms posed by facial recognition AI systems, such as biased results and privacy violations, but does not describe any realized harm or incident. The main focus is on advocacy for better regulation to prevent future harms, which fits the definition of an AI Hazard rather than an Incident or Complementary Information.

House Dems seek guardrails for law enforcement's use of facial recognition

2023-10-30
Nextgov
Why's our monitor labelling this an incident or hazard?
The article focuses on a bill proposing guardrails for facial recognition technology use by law enforcement, highlighting concerns about privacy, discrimination, and misuse. While it references known issues with AI facial recognition (such as bias and misidentification), it does not describe a realized harm or incident. The content is about governance and policy responses to potential AI harms, making it Complementary Information rather than an AI Incident or AI Hazard.

Police Urged to Double AI-Enabled Facial Recognition Searches to Enhance Crime-Fighting Efforts

2023-10-31
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (facial recognition technology, AI tools for identifying child sexual abuse material) being used in policing. The AI systems are involved in the use phase, aiding law enforcement in identifying suspects and preventing crimes, which is a positive impact rather than harm. There is no indication that the AI systems have caused injury, rights violations, or other harms. The article emphasizes legal compliance, accuracy improvements, and transparency measures, indicating governance and risk management. It also discusses proactive government initiatives to address AI risks, such as combating AI-generated child sexual abuse images and hosting an AI Safety Summit. Since no harm or plausible harm from AI use is reported, and the focus is on the benefits and governance, the event is best classified as Complementary Information.

Government urges police to step up facial recognition

2023-10-31
Computing
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for biometric identification. The government's recommendation to significantly increase its use in policing, despite known issues such as wrongful apprehensions and regulatory warnings, presents a plausible risk of violations of human rights and privacy (harm category c). Since the article does not report new incidents of harm but discusses the potential for such harms due to increased deployment and existing controversies, this qualifies as an AI Hazard rather than an AI Incident. The concerns about unchecked surveillance and legal/ethical breaches support the classification as a hazard with plausible future harm.

Privacy bill fails to address dangers of facial recognition technology: coalition

2023-11-01
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The article focuses on warnings about the dangers of facial recognition technology and the need for better regulation, which implies plausible future harm but does not document an actual harm event. Therefore, it fits the definition of an AI Hazard, as the technology's use could plausibly lead to harms such as privacy violations, biased outcomes, and threats to fundamental rights if unregulated.

Statewatch | International police facial recognition system: Parliament must ensure democratic debate

2023-10-31
Statewatch
Why's our monitor labelling this an incident or hazard?
The article centers on the proposed expansion of an AI-enabled police facial recognition system and the associated privacy and human rights concerns. While it clearly identifies plausible future harms from the use of AI in law enforcement and biometric data sharing, it does not describe any actual incident or harm caused by the AI system. The concerns about potential misuse, lack of proportionality, and insufficient democratic oversight align with the definition of an AI Hazard. However, since the article is primarily a call for debate and scrutiny rather than a report of a specific event or near miss, it fits best as Complementary Information, providing context and governance-related discussion of AI risks rather than documenting a direct AI Hazard or Incident.

Facing criticism for its use in law enforcement, facial recognition technology has been a topic of concern for lawmakers and civil rights advocates. New

2023-10-31
Bollyinside - Breaking & latest News worldwide
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification purposes. The article reports that its use by the police has led to false identifications and disproportionate targeting of a specific racial group, which constitutes a violation of rights and harm to communities. These outcomes are direct harms caused by the AI system's use, fulfilling the criteria for an AI Incident.