Controversy Over Metropolitan Police Use of Live Facial Recognition at Notting Hill Carnival

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Metropolitan Police's use of live facial recognition (LFR) at the Notting Hill Carnival has drawn criticism from civil rights groups and London Assembly members, who cite concerns over bias, misidentification, and human rights violations, particularly affecting women and minority communities. Despite police assurances of improved accuracy, critics argue the technology remains discriminatory and undermines public trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (live facial recognition) used by law enforcement to identify individuals from live video feeds. The system's use has directly led to arrests, indicating realized impact. There are documented cases of false matches causing harm through wrongful suspicion or identification, which constitutes harm to individuals' rights and potentially to communities. The concerns about bias and privacy violations further indicate violations of human rights and legal obligations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harms including wrongful identification and potential rights violations.[AI generated]
AI principles
Fairness, Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Robustness & digital security, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Women, General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Met Police urged to drop facial scanning at Notting Hill Carnival

2025-08-17
BBC
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (live facial recognition) and discusses its intended use, which could plausibly lead to harms such as violations of rights or discriminatory impacts. However, no actual harm or incident has occurred yet, and the focus is on the potential risks and opposition to the planned use. Therefore, this qualifies as an AI Hazard, reflecting credible concerns about plausible future harm from the AI system's deployment.
Met Police urged to drop facial scanning at Notting Hill Carnival

2025-08-17
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (live facial recognition) by the Metropolitan Police and discusses concerns about its accuracy and bias, which are relevant to potential harms. The concerns include possible violations of rights and racial bias, which fall under the harm category of violations of human rights. Although there is mention of an ongoing judicial review related to wrongful identification, the article does not report a new or specific incident of harm occurring at the Notting Hill Carnival itself. The police assert the system's accuracy and its role in arrests, but no direct harm at this event is documented. The event thus represents a credible risk of harm due to the deployment of LFR technology in a sensitive context, making it an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the report.
'Facial recognition can make mistakes, it's not a decision-maker'

2025-08-18
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of an AI system (live facial recognition) in law enforcement, which qualifies as an AI system. However, it does not describe any direct or indirect harm caused by the system, nor does it indicate a plausible future harm event. Instead, it provides information about the deployment scale, operational procedures, and privacy measures, as well as public concerns. This fits the definition of Complementary Information, as it enhances understanding of the AI system's use and governance without reporting a new incident or hazard.
'Facial recognition can make mistakes, it's not a decision-maker'

2025-08-18
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition) used by law enforcement to identify individuals from live video feeds. The system's use has directly led to arrests, indicating realized impact. There are documented cases of false matches causing harm through wrongful suspicion or identification, which constitutes harm to individuals' rights and potentially to communities. The concerns about bias and privacy violations further indicate violations of human rights and legal obligations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harms including wrongful identification and potential rights violations.
Met defends facial recognition plan for Notting Hill Carnival

2025-08-19
BBC
Why's our monitor labelling this an incident or hazard?
The article describes the planned use of an AI system (LFR) by the police and the associated societal concerns, but it does not report any actual harm or incident caused by the AI system. There is no indication that the use of LFR has directly or indirectly led to injury, rights violations, or other harms at the event. Therefore, this is not an AI Incident. However, the deployment of LFR in a public setting with contested implications for rights and inclusion could plausibly lead to harms such as rights violations or community harm if misused or malfunctioning. Since the article focuses on the defense of the technology ahead of its use and the concerns raised, it fits best as Complementary Information providing context on societal and governance responses to AI use in law enforcement.
Met defends facial recognition plan for Notting Hill Carnival

2025-08-19
BBC
Why's our monitor labelling this an incident or hazard?
The use of LFR, an AI system for facial recognition, is explicitly mentioned. Its deployment at a large public event has directly led to arrests, which implies the AI system's outputs influenced law enforcement actions. The concerns raised about racial bias and wrongful identification indicate potential or actual violations of rights. Since the technology's use has already resulted in arrests and has caused controversy over accuracy and discrimination, this constitutes an AI Incident involving violations of rights and harm to communities. The article focuses on the ongoing use and defense of the system rather than just potential risks or policy responses, so it is not merely complementary information or a hazard.
Shop facial recognition cameras are flagging 10,000 suspects a week

2025-08-19
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for real-time identification of individuals on watchlists. Its use by retailers and police has directly led to alerts that influence actions taken against suspects, impacting individuals' rights and raising concerns about mass surveillance and privacy violations. These effects constitute violations of human rights and fundamental rights, fitting the definition of an AI Incident. The article reports ongoing use and realized impacts rather than just potential risks or responses, so it is classified as an AI Incident rather than a hazard or complementary information.
Notting Hill face-recognition technology will be used without bias...

2025-08-19
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves the active use of an AI system (live facial recognition technology) by law enforcement and retail sectors. The technology's deployment has directly led to concerns about bias and discrimination, which are violations of human rights and can harm communities. The article reports that the technology has a history of inaccurate outcomes and racial bias, and that it treats all carnival-goers as potential suspects, which can cause harm to individuals' rights and social trust. The increase in suspect alerts in retail settings also suggests potential harm from false positives affecting people's lives. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to harm or rights violations.
Face-recognition tech will be used without bias at festival, Met boss says

2025-08-19
Sky News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (live facial recognition) by the police, which is relevant to AI system involvement. However, it does not describe any direct or indirect harm resulting from the AI system's use at the carnival, only concerns and criticisms about potential bias and legal issues. The article also reports on increased alerts from a retail facial recognition system without linking this to harm. Since no harm has been realized or a plausible immediate hazard demonstrated, and the main focus is on the deployment, improvements, and societal concerns, this fits the definition of Complementary Information rather than an Incident or Hazard.
U.K. rolls out mobile facial recognition vans - NaturalNews.com

2025-08-20
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition technology) actively used by law enforcement. The use has directly led to harms such as misidentification of individuals, raising concerns about violations of fundamental rights including privacy and freedom of assembly. The deployment without clear legal safeguards and the risk of creating a total surveillance society further support the classification as an AI Incident. The presence of a legal challenge and documented cases of wrongful identification confirm that harm has occurred, not just potential harm.
Notting Hill face-recognition technology will be used without bias, police say

2025-08-19
getwestlondon
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (facial recognition technology) and discusses their use and societal implications. However, it does not describe any specific event where the AI system's use has directly or indirectly caused harm such as injury, rights violations, or community harm. The concerns about bias and surveillance are noted, but no concrete incident of harm is reported. The article mainly presents the debate, concerns, and responses around the technology's deployment, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI use rather than reporting a new AI Incident or AI Hazard.
Notting Hill face-recognition technology will be used without bias - Met boss

2025-08-19
Kent Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (live facial recognition technology) in active deployment by the police and retail stores. The concerns about bias and inaccuracies, especially racial bias, indicate violations of rights and harm to communities. The technology's use in mass surveillance and suspect identification directly impacts individuals' rights and can cause harm through wrongful suspicion or discrimination. These harms are occurring or have occurred, qualifying this as an AI Incident. The article also highlights societal responses and concerns but the primary focus is on the realized harms from the AI system's use.
Notting Hill face-recognition technology will be used without bias - Met boss

2025-08-19
The Irish News
Why's our monitor labelling this an incident or hazard?
The use of live facial recognition technology by the police at a major public event, with acknowledged issues of bias and concerns from civil rights groups, constitutes an AI Incident. The technology's deployment has directly led to concerns about violations of human rights and potential discriminatory harm to communities, fulfilling the criteria for harm under the framework. The article describes realized use and associated harms rather than potential future risks or mere updates, so it is not a hazard or complementary information.
Calls for the Metropolitan Police to Abandon Facial Recognition at Carnival - The Global Herald

2025-08-17
The Global Herald
Why's our monitor labelling this an incident or hazard?
The article describes the planned use of an AI system (live facial recognition) by the police, which is known to have issues with accuracy and bias, particularly affecting minority ethnic groups and women. The concerns raised by civil rights groups and the ongoing judicial review of a misidentification case indicate a credible risk of harm, including violations of human rights and harm to communities. Since the event is upcoming and no new harm has yet been reported from this deployment, it fits the definition of an AI Hazard rather than an AI Incident. The AI system's use could plausibly lead to harm, but the harm has not yet materialized in this specific event.
Notting Hill face-recognition technology will be used without bias - Met boss

2025-08-19
Basingstoke Gazette
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition technology) actively used by law enforcement to identify suspects. The concerns about racial bias and inaccuracies in the technology, especially in a culturally significant event, indicate that the AI system's use has led to or is causing harm related to violations of human rights and harm to communities. The article references the technology's history of inaccurate outcomes and racial bias, which are recognized harms under the framework. Therefore, this is classified as an AI Incident due to realized harm linked to the AI system's deployment and its societal impact.
Assembly members criticise planned increase in police use of live facial recognition in London - The Fitzrovia News

2025-08-17
The Fitzrovia News
Why's our monitor labelling this an incident or hazard?
Live facial recognition technology is an AI system that maps and matches facial features against watchlists. Its use has directly led to harms including wrongful identification of individuals (a person wrongly flagged as a suspect), disproportionate targeting of minority communities, and concerns about erosion of civil liberties and democratic principles. These constitute violations of human rights and harm to communities. The article also mentions ongoing legal challenges and public criticism, but the primary focus is on the realized harms caused by the AI system's use in policing. Therefore, this event qualifies as an AI Incident.
Met defend use of live facial recognition at Notting Hill Carnival

2025-08-19
thetimes.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Live Facial Recognition) in an operational context that has directly led to arrests and charges in response to criminal activity, including violent crime. At the same time, concerns about bias and disproportionate impact on ethnic minorities indicate potential violations of rights or discriminatory harm. Since the technology's deployment has already resulted in apprehensions and charges, and there are ongoing debates about its fairness and impact on communities, this qualifies as an AI Incident due to the realized and ongoing harms related to rights and community impact stemming from the AI system's use.
I now think police use of live facial recognition will make us safer - here's why you should think so too | Brian Paddick

2025-08-20
The Guardian
Why's our monitor labelling this an incident or hazard?
The article centers on the use and governance of an AI system (live facial recognition) by police, which qualifies as an AI system under the definitions. However, it does not describe any event where the AI system's use or malfunction has directly or indirectly caused harm (such as injury, rights violations, or community harm). Nor does it present a credible risk of future harm. Instead, it offers an opinion and contextual information about the technology's deployment and legal considerations. Therefore, it fits best as Complementary Information, providing context and discussion about AI use in policing without reporting an incident or hazard.
Met Police urged to scrap facial recognition at Notting Hill Carnival over 'racial bias' fears

2025-08-20
AOL.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the deployment of an AI system (Live Facial Recognition) and the concerns about its racial bias and accuracy, which could plausibly lead to violations of rights and harm to communities if misused or if inaccuracies result in wrongful identification or discrimination. Since the technology is planned to be used and has been used previously with arrests, but no specific incident of harm or rights violation is reported here, the event fits the definition of an AI Hazard rather than an AI Incident. The concerns about racial bias and lack of clear legal basis highlight potential future harms, making this a credible AI Hazard.
Real-time facial recognition meets real-world debate, but where's the data? | Biometric Update

2025-08-20
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition) used by police, with concerns about bias, human rights compliance, and effectiveness. While there are worries about potential harms such as racial discrimination and abuse of authority, the article does not report any realized harm or incident directly caused by the AI system. Instead, it focuses on the debate, lack of evidence, and regulatory scrutiny, which aligns with the definition of Complementary Information as it provides context and updates on societal and governance responses to AI use without describing a specific AI Incident or AI Hazard.
U.K. rolls out mobile facial recognition vans

2025-08-20
SGT Report
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (live facial recognition) being used operationally by police forces, which fits the definition of an AI system. The concerns raised about intrusiveness and potential unlawfulness indicate plausible risks of harm, such as violations of human rights or privacy, but no direct or indirect harm is reported as having occurred. The deployment of these vans could plausibly lead to AI incidents if misuse or malfunction occurs. Therefore, the event is best classified as an AI Hazard, reflecting credible potential for harm without evidence of realized harm yet.
Doubts cast over success of Met Police's Live Facial Recognition technology

2025-08-20
Southwark News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition technology) used by police forces. The use of LFR has directly led to arrests, which is a form of law enforcement action impacting individuals' rights. There is also a reported wrongful identification case leading to a legal challenge, indicating harm to an individual's rights and potential harm to others. The concerns about disproportionate targeting and erosion of civil liberties relate to violations of human rights and fundamental rights. Therefore, the event qualifies as an AI Incident due to realized harms linked to the AI system's use in policing.
Met Police's use of facial recognition tech 'breaches human rights'

2025-08-20
thetimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of an AI system (live facial recognition technology) by the police, which has directly led to harm through wrongful identification and alleged breaches of human rights such as privacy, freedom of expression, and freedom of assembly. The wrongful identification of an individual as a criminal constitutes a direct harm caused by the AI system's malfunction or misuse. The concerns about racial bias and discrimination further support the classification as an AI Incident due to violations of fundamental rights. The ongoing legal challenge and public debate underscore the realized harm rather than just potential risk, distinguishing this from an AI Hazard or Complementary Information.
More than 500 people were arrested at Notting Hill Carnival

2025-08-26
Yahoo
Why's our monitor labelling this an incident or hazard?
Live facial recognition is an AI system used here for real-time identification of individuals in a public setting. Its deployment led to the arrest of individuals involved in serious crimes, including a registered sex offender and a suspect accused of stabbing. This shows the AI system's use directly contributed to law enforcement outcomes addressing harm to persons and communities. The event describes realized harm (crime, violence) and the AI system's role in mitigating it, fitting the definition of an AI Incident. The article does not merely discuss potential or future harm, nor is it only about governance or complementary information. Hence, classification as AI Incident is appropriate.
Notting Hill Carnival sees drop in serious violence despite Met Police making 528 arrests

2025-08-26
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of live facial recognition software, an AI system, which contributed to arrests and policing efforts. However, there is no indication that the AI system caused harm or malfunctioned leading to injury, rights violations, or other harms. The arrests and policing outcomes are described as positive in terms of reducing serious violence. The AI system's role is supportive and does not itself cause harm. Thus, the event fits the definition of Complementary Information, as it updates on the use and effectiveness of AI in policing without reporting new harm or plausible future harm.
UK police deploy facial recognition at London's Notting Hill Carnival - Muvi TV

2025-08-26
Muvi TV
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used in real-time to identify individuals in a crowd. Its use has directly led to arrests, which implies harm to individuals' rights and freedoms, fitting the definition of an AI Incident under violations of human rights or breach of legal protections. The concerns about racial bias and wrongful identification further support this classification. Therefore, this event is best classified as an AI Incident.
UK police deploy facial recognition at London's Notting Hill Carnival

2025-08-26
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used in real-time to identify individuals in a large crowd. Its use has directly led to arrests, which is a form of harm related to human rights and civil liberties, especially given documented concerns about racial bias and wrongful identification. The article reports actual harm occurring (arrests based on AI identification), not just potential harm, making this an AI Incident. The concerns about bias and wrongful detention further support the classification as an incident involving violations of rights.
Notting Hill Carnival police make more than 525 arrests

2025-08-27
EXPRESS
Why's our monitor labelling this an incident or hazard?
The article mentions live facial recognition, an AI system, used by police to identify suspects leading to arrests. However, there is no indication that the AI system caused harm, malfunctioned, or led to violations of rights or other harms. The arrests and violent incidents are criminal matters independent of AI malfunction or misuse. The AI system's role is supportive and no harm or plausible future harm from AI is described. Hence, this is Complementary Information about AI use in policing, not an Incident or Hazard.