UK Government Plans Expansion of AI Facial Recognition Amid Rights Concerns

The UK government and Ministry of Defence are seeking to expand AI-based facial recognition in policing and security, soliciting industry proposals for national deployment. This move has sparked criticism from privacy advocates and rights groups over risks of bias, mass surveillance, and potential human rights violations, though no direct harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system—facial recognition technology used by Facewatch. The concerns raised relate to privacy and human rights, which fall under violations of human rights as defined in the framework. Although no direct harm incident is reported, the article documents ongoing regulatory investigations and government lobbying that could affect the oversight and control of this AI system. This situation plausibly leads to AI-related harms if the technology is deployed without adequate safeguards, making it an AI Hazard. It is not Complementary Information because the main focus is not on responses or updates to a past incident but on the potential risks and regulatory influence. It is not an AI Incident because no actual harm or violation has been confirmed or reported as having occurred yet.[AI generated]
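
The rationale above applies the monitor's taxonomy: realized harm makes an event an AI Incident, a credible but unrealized risk makes it an AI Hazard, follow-up on a known incident is Complementary Information, and anything without an AI system is Unrelated. As a rough illustration only, the short Python sketch below restates that decision rule; the function and field names are hypothetical and are not part of the OECD framework.

# Illustrative sketch of the classification rule described above.
# All names are hypothetical; the AIM framework is defined in prose, not code.
def classify_event(involves_ai_system: bool,
                   harm_realized: bool,
                   updates_past_incident: bool,
                   plausible_future_harm: bool) -> str:
    if not involves_ai_system:
        return "Unrelated"                  # no AI system involved
    if harm_realized:
        return "AI Incident"                # harm has already occurred
    if updates_past_incident:
        return "Complementary Information"  # follow-up on a known incident
    if plausible_future_harm:
        return "AI Hazard"                  # credible risk, but no harm yet
    return "Unrelated"

# The event above: an AI system is involved, no harm is confirmed,
# and credible concerns point to plausible future harm.
print(classify_event(True, False, False, True))  # -> AI Hazard
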
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Compliance and justice; Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard

Home Office accused of secret lobbying for facial recognition 'spy' company

2023-09-02
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—facial recognition technology used by Facewatch. The concerns raised relate to privacy and human rights, which fall under violations of human rights as defined in the framework. Although no direct harm incident is reported, the article documents ongoing regulatory investigations and government lobbying that could affect the oversight and control of this AI system. This situation plausibly leads to AI-related harms if the technology is deployed without adequate safeguards, making it an AI Hazard. It is not Complementary Information because the main focus is not on responses or updates to a past incident but on the potential risks and regulatory influence. It is not an AI Incident because no actual harm or violation has been confirmed or reported as having occurred yet.

Government looking to expand use of facial recognition technology

2023-08-31
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification and surveillance. The government's active plans to expand its use in law enforcement and security agencies indicate development and intended use of AI systems. Although the article does not report any realized harm yet, credible concerns from campaigners and civil rights groups highlight potential violations of privacy and human rights, which are recognized harms under the framework. The potential for Orwellian mass surveillance and disproportionate impact on certain communities supports the plausible risk of harm. Hence, this is best classified as an AI Hazard rather than an Incident or Complementary Information, as no harm has yet materialized, and it is not unrelated news.

UK government seeks expanded use of AI-based facial recognition by police

2023-08-30
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned expansion of facial recognition systems by law enforcement; these qualify as AI systems because they process biometric data to identify individuals. However, the article does not report any realized harm or incident resulting from these systems; rather, it discusses potential risks, legal concerns, and calls for regulation. Therefore, this situation represents a plausible risk of harm (e.g., privacy violations, bias, rights infringements) that could arise from the deployment of these AI systems. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their societal implications.

Surveillance Tsar Urges Caution as Home Office Seeks to Expand Facial Recognition Cameras Across UK

2023-08-31
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article centers on the planned wider deployment of facial recognition AI systems by the Home Office and the associated societal and legal concerns. It references past incidents and legal rulings as context but does not report a new incident of harm. The potential for future harm through privacy violations, bias, and misuse is clearly present, making this an AI Hazard. The article also includes commentary from the surveillance watchdog and civil liberty groups emphasizing the need for caution and accountability. Therefore, the event is best classified as an AI Hazard due to the plausible future risks posed by the planned expansion of facial recognition technology.

IBM promised to back off facial recognition -- then it signed a $69.8 million contract to provide it

2023-08-31
The Verge
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that has been widely criticized for enabling racial profiling, mass surveillance, and violations of privacy and human rights. IBM's prior withdrawal was based on these concerns, and its return to the market with a significant contract suggests the use of AI systems that have a direct or indirect role in human rights violations. Given the history and the nature of the technology, this event constitutes an AI Incident due to the realized or ongoing harm linked to the deployment of facial recognition AI.

Government looking to expand use of facial recognition technology

2023-08-31
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (facial recognition technology) by government agencies for law enforcement and security. Although the article does not report any realized harm, it clearly outlines the potential for significant harms, including privacy violations, mass surveillance, and discriminatory impacts, which are recognized by civil rights groups and campaigners. The government's active push to expand and enhance these AI systems within a short timeframe establishes a credible risk of future harm. Hence, this is best classified as an AI Hazard, reflecting plausible future harm from the AI system's deployment.

Facial recognition technology labelled 'Orwellian' as government eyes wider use by police and security agencies

2023-08-30
Sky News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identifying individuals from images or video. The government's active push to expand its use by police and security agencies indicates ongoing development and deployment plans. Although no new incident of harm is reported, the article references past unlawful use and widespread concerns about privacy and rights violations, which are credible risks associated with this technology. The potential for mass surveillance and discriminatory impacts aligns with plausible future harms under the AI Hazard definition. Since no new realized harm is described, and the focus is on potential expansion and associated risks, the classification as AI Hazard is appropriate.

IBM vowed to dial back facial recognition tech, but recently landed $70 million contract to develop it

2023-08-31
TechSpot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—being developed and deployed for law enforcement and immigration identification. The use of facial recognition in these contexts has been associated with human rights violations such as racial profiling and privacy infringements. IBM's prior commitment to cease general-purpose facial recognition contrasts with this new contract, indicating a failure to fully comply with ethical and legal frameworks protecting human rights. The involvement of the AI system in potentially violating rights and the controversy around its use in sensitive areas meets the criteria for an AI Incident, as the harm (violation of human rights) is directly linked to the AI system's use.

IBM vowed to dial back facial recognition tech, but recently...

2023-08-31
TechSpot
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—facial recognition technology integrated into a biometric platform. The use of this AI system by law enforcement and immigration agencies directly relates to potential violations of human rights, including privacy and risks of racial profiling. Although IBM asserts compliance with its 2020 stance against mass surveillance, the deployment of facial recognition for strategic face matching in law enforcement contexts is widely criticized as harmful and incompatible with human rights. Given that the system is actively being developed and contracted for use, and that human rights concerns are raised, this constitutes an AI Incident due to the direct or indirect harm to human rights and potential breaches of legal protections. The event is not merely a future risk but an ongoing deployment with associated harms and controversies.

Big Brother Is Watching: UK police to increase use of AI facial recognition despite inaccuracies

2023-09-01
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned increased deployment of an AI system (facial recognition) that has a history of causing harm through misidentification and bias, which can violate human rights and harm communities. Although past incidents of harm are referenced, the article mainly reports on the government's plans and the surrounding controversy, without describing a new concrete incident of harm occurring now. Therefore, this is best classified as an AI Hazard, since the increased use of this AI system could plausibly lead to further incidents of harm, including rights violations and misidentifications, especially given the lack of sufficient legislation and ongoing concerns.

Government looking to expand use of facial recognition technology

2023-08-31
Evening Standard
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. The government's active efforts to expand its use in policing and security imply increased deployment and reliance on AI systems that could lead to violations of privacy and human rights, constituting harm to communities and individuals. Although no direct harm is reported yet, credible concerns and opposition from civil rights groups indicate a plausible risk of AI-related harms such as mass surveillance and discrimination. Hence, this situation fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident in the future if not properly governed.

UK Government's 'disturbing' plans to expand facial recognition

2023-09-01
InYourArea.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the government's plans to expand the use of facial recognition AI systems in policing and security, which involves the development and use of AI systems. Although no direct harm has yet been reported, the credible concerns about privacy invasion, mass surveillance, and disproportionate impact on certain communities indicate plausible future harms. The event does not describe an actual incident of harm but rather a planned expansion that could lead to such harms. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential societal impact.

IBM facial recognition contract provokes debate on specifics, and 'general purpose' fear | Biometric Update

2023-09-01
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article discusses the deployment of an AI system (facial recognition) with potential for human rights violations, specifically mass surveillance and racial profiling, which are recognized harms under the framework. However, it does not describe any actual harm or incident that has occurred as a result of this contract or the use of the technology. The concerns and accusations are about possible or potential misuse and the implications of IBM's policy changes. Therefore, this event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to an AI Incident involving human rights violations, but no direct or indirect harm has been reported yet.

Government looking to expand use of facial recognition technology

2023-08-31
Kent Online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (facial recognition technology) and discusses their planned expanded use by government agencies. Although no direct harm has yet been reported, the credible concerns about Orwellian mass surveillance, privacy invasion, and disproportionate impact on certain communities indicate a plausible risk of harm. The event is about the development and intended use of AI systems that could lead to violations of fundamental rights, fitting the definition of an AI Hazard. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated, as the focus is on the potential risks and expansion of AI use in a sensitive domain.

Government looking to expand use of facial recognition technology

2023-08-31
Shropshire Star
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification and surveillance. The government's call for expanding its use in policing and security agencies indicates development and intended use of AI systems. Although no direct harm has been reported yet, the article outlines significant concerns about privacy invasion, potential bias, and mass surveillance, which are recognized as violations of human rights and harm to communities. These concerns, supported by civil rights groups and campaigners, establish a credible risk that the expanded deployment could lead to AI Incidents in the future. Since no actual harm has occurred yet, the event fits the definition of an AI Hazard rather than an AI Incident.

Government looking to expand use of facial recognition technology

2023-08-31
Guernsey Press
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification and surveillance. The article discusses plans to expand its use, which could plausibly lead to harms such as privacy violations, mass surveillance, and disproportionate targeting of communities, all of which fall under violations of human rights and harm to communities. Since no actual harm has yet occurred but the expansion is planned and actively pursued, this is a credible potential risk. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

UK Government Seeks Expanded Use of AI-based Facial ... - Slashdot

2023-09-01
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned expansion of facial recognition systems by government agencies; these qualify as AI systems because they process biometric data to identify individuals. The concerns about bias and inaccuracies indicate potential for harm, particularly violations of rights and harm to communities. Since the expansion is proposed and no direct harm is reported yet, this constitutes a plausible risk of harm in the near future, fitting the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update, but a credible potential source of harm due to the nature of the technology and its intended use.