Concerns Over Police Scotland's Facial Recognition Plans

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Police Scotland's consideration of live facial recognition technology has raised concerns about potential bias and human rights violations. Chief Constable Jo Farrell advocates for its use, while experts and the Scottish Liberal Democrats warn that it could damage public trust in policing and is not yet fit for deployment.[AI generated]

Why's our monitor labelling this an incident or hazard?

Live facial recognition is an AI system under consideration, and although no deployment or harms have yet occurred in Scotland, its use would plausibly lead to violations of privacy, potential civil rights breaches, and misidentification harms (false positives/negatives). The discussion is about preventing future risks rather than reporting a realized incident, making this an AI Hazard.[AI generated]

AI principles
Fairness, Privacy & data governance, Respect of human rights, Transparency & explainability, Robustness & digital security, Accountability, Democracy & human autonomy, Safety

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Reputational, Public interest

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

No 'compelling need' shown for face scanning tech plan by Police Scotland, SNP told

2024-10-05
The Scotsman
Why's our monitor labelling this an incident or hazard?
Live facial recognition is an AI system under consideration, and although no deployment or harms have yet occurred in Scotland, its use would plausibly lead to violations of privacy, potential civil rights breaches, and misidentification harms (false positives/negatives). The discussion is about preventing future risks rather than reporting a realized incident, making this an AI Hazard.

Mark Smith: We should all be worried about Scotland's use of facial recognition

2024-10-07
The Herald
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by Police Scotland for retrospective and potentially live identification. The article details its active use and the harms arising from it, such as false alerts leading to wrongful arrests and racial bias disproportionately affecting Black individuals. These harms directly relate to violations of human rights and civil liberties, fulfilling the criteria for an AI Incident. The article also emphasizes the lack of public consent and debate, reinforcing the significance of these harms. Therefore, this event is best classified as an AI Incident due to the realized harms caused by the AI system's use in policing.

Calls grow for investigation into Police Scotland's live facial recognition tactics | Biometric Update

2024-10-07
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (live facial recognition) by Police Scotland, which qualifies as an AI system under the definitions. However, the article centers on public and political calls for investigation and oversight due to concerns about privacy and civil liberties, rather than describing a specific incident where the AI system directly or indirectly caused harm. There is no report of realized harm or a near-miss event that would constitute an AI Incident or AI Hazard. The focus on policy questions, parliamentary inquiries, and advocacy for regulation fits the definition of Complementary Information, as it enhances understanding of the AI ecosystem and societal responses without reporting a new primary harm or imminent risk.

Scottish Government faces questions over police AI facial recognition proposals

2024-10-05
STV News
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of an AI system (live facial recognition) by Police Scotland. The article highlights significant concerns about the system's accuracy and potential discriminatory effects, which could plausibly lead to violations of human rights and civil liberties if implemented. However, since the technology is still in the proposal stage and no direct harm has been reported, this situation represents a plausible future risk rather than an incident of realized harm. Therefore, it qualifies as an AI Hazard.

Concerns Raised Over Police Scotland's Consideration of Live Facial Recognition Technology

2024-10-07
idtechwire.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition technology) being considered for use by Police Scotland, which fits the definition of an AI system. However, the article does not describe any actual deployment or malfunction of the system leading to harm. Instead, it outlines concerns about possible future harms such as misidentification, bias, and civil liberties violations, as well as the need for proper legal and public scrutiny. Since no harm has occurred yet but there is a plausible risk of harm if the technology is deployed without adequate safeguards, this situation qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated.

Call for probe into Police Scotland plan for AI and facial recognition tech

2024-10-05
Express.co.uk
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of an AI system (live facial recognition) by Police Scotland, which could plausibly lead to harms such as violations of civil liberties, discrimination, and misidentification. The article details concerns about accuracy and ethical issues, referencing past false alerts and legal challenges elsewhere. However, no direct or indirect harm has yet occurred in this case, making it an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the potential risks and calls for investigation rather than updates on past incidents or governance responses. Hence, the classification is AI Hazard.