Facial Recognition at Beyoncé Concerts Sparks Human Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Wales Police used AI-powered facial recognition to scan Beyoncé concert crowds for known paedophiles and terrorists, comparing faces against police watchlists. Human rights campaigners criticized the practice as a violation of privacy and fundamental rights, noting that images of flagged individuals were retained and warning of surveillance overreach and potential discrimination.[AI generated]
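The core operation described above, comparing a face captured in a crowd against a watchlist, is typically done by encoding each face as a numeric embedding and flagging watchlist entries whose embedding is sufficiently similar. The sketch below is purely illustrative and is not the South Wales Police system: the function names, the toy 4-dimensional vectors, and the 0.8 threshold are all assumptions; real deployments use deep-learning embeddings with hundreds of dimensions and operationally tuned thresholds.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_against_watchlist(probe, watchlist, threshold=0.8):
    """Return indices of watchlist embeddings whose cosine similarity
    to the probe face embedding meets the threshold (illustrative only)."""
    return [i for i, ref in enumerate(watchlist)
            if cosine_similarity(probe, ref) >= threshold]

# Toy 4-dimensional vectors standing in for real face embeddings.
watchlist = [
    [1.0, 0.0, 0.0, 0.0],  # hypothetical watchlist entry A
    [0.0, 1.0, 0.0, 0.0],  # hypothetical watchlist entry B
]
probe = [0.95, 0.05, 0.0, 0.0]  # a crowd face close to entry A
print(match_against_watchlist(probe, watchlist))  # -> [0]
```

The threshold choice is exactly where the false-positive and discrimination concerns raised by campaigners arise: a lower threshold flags more innocent people, and error rates of the underlying embedding model can differ across demographic groups.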

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition technology) in a law enforcement context. The technology's use directly affects individuals' rights and privacy, which falls under violations of human rights or breaches of obligations intended to protect fundamental rights. Even though no arrests or positive identifications occurred at these specific events, the deployment of this technology constitutes an AI Incident because it involves direct use of AI systems that impact rights and freedoms. The concerns about bias and discrimination further support classification as an AI Incident due to potential harm to communities and individuals. The article also references prior use and ongoing debates, but the primary focus is on the actual deployment and its implications, not just complementary information or future hazards.[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Fairness · Transparency & explainability · Accountability · Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights · Reputational · Public interest · Psychological

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Police Scanned Beyoncé Concert for Pedophiles, Terrorists

2023-11-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system, used here by police to scan attendees for surveillance purposes. However, the article does not report any actual harm, wrongful identification, or arrests resulting from this use. The concerns raised are about potential privacy violations, false positives, and discrimination risks, but these are not documented as having occurred in this event. Therefore, this event represents a plausible risk scenario rather than a realized harm. It fits the definition of an AI Hazard because the use of AI could plausibly lead to harms such as privacy violations or wrongful accusations, even though no such harm is reported here. It is not Complementary Information because the main focus is on the deployment and its risks, not on updates or responses to prior incidents. It is not Unrelated because AI is clearly involved.
Beyoncé crowd scanned for potential paedophiles at UK gig

2023-11-08
LADbible
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for scanning crowds at a concert. The article does not describe any actual harm occurring (e.g., wrongful identification or arrest), but highlights concerns about bias and discrimination, which are credible risks. The police's use of watchlists and retention policies is described, but no incident of harm is reported. Hence, the event represents a plausible risk of harm from AI use rather than a realized harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Police scanned crowd for paedophiles at Beyoncé and Harry Styles gigs

2023-11-10
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in a law enforcement context. The technology's use directly affects individuals' rights and privacy, which falls under violations of human rights or breaches of obligations intended to protect fundamental rights. Even though no arrests or positive identifications occurred at these specific events, the deployment of this technology constitutes an AI Incident because it involves direct use of AI systems that impact rights and freedoms. The concerns about bias and discrimination further support classification as an AI Incident due to potential harm to communities and individuals. The article also references prior use and ongoing debates, but the primary focus is on the actual deployment and its implications, not just complementary information or future hazards.
Beyonce concert audience scanned for paedophiles using facial recognition technology

2023-11-09
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition technology using AI for face comparison) being used in policing. However, the article states that no positive matches or arrests were made, so no direct harm or violation has occurred. The concerns raised by human rights groups relate to potential or systemic harms such as privacy violations and discrimination, but these are not reported as realized harms in this specific event. Therefore, this is best classified as Complementary Information, as it provides context on the use and societal response to AI facial recognition technology in policing, without describing a specific AI Incident or AI Hazard.
Beyoncé's Cardiff concert's facial scan security slammed by human rights campaigners

2023-11-08
The News International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (live facial recognition) in a security context scanning concert attendees. The use of this AI system has led to criticism from human rights campaigners, indicating concerns about violations of fundamental rights. The scanning and retention of images of individuals identified on watchlists can be considered a breach of privacy and human rights. Although no specific harm such as wrongful detention is reported, the direct use of AI in a way that has caused rights concerns and public criticism qualifies this as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.
Beyoncé concert audience scanned for paedophiles with use of facial recognition technology

2023-11-10
Irish Independent
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for real-time identification of individuals in a crowd. The police's use of this technology to scan for sex offenders and terrorists at a public event constitutes a direct use of AI with implications for human rights, including privacy and potential wrongful targeting. Since the technology is actively used and affects individuals' rights, this qualifies as an AI Incident under violations of human rights or breach of legal protections.
Beyoncé's Cardiff concert's facial scan security slammed

2023-11-10
The Nation
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system that processes biometric data to identify individuals. Its deployment at the concert for security screening directly involves the use of AI. The criticism by human rights campaigners highlights concerns about potential violations of fundamental rights, including privacy and possibly unlawful surveillance. The retention and use of biometric data, even if limited to flagged individuals, implicates human rights issues. Since the AI system's use has led to these concerns and potential harm to rights, this qualifies as an AI Incident under the framework's definition of violations of human rights or breach of obligations under applicable law.
Facial recognition cameras used to scan for paedophiles in crowd at Beyoncé's UK concert

2023-11-08
Newshub
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here for live identification of individuals in a crowd. The deployment is intended to prevent crime and protect vulnerable groups, but the concerns about misuse, discrimination, and privacy violations indicate potential for harm. Since no actual harm or rights violations are reported as having occurred, this event fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm such as violations of human rights or discriminatory policing practices.
Beyonce crowd scanned for potential paedophiles at UK gig

2023-11-09
JOE.co.uk
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for scanning crowds to identify individuals on watchlists. While there are concerns about human rights violations and discrimination, the article does not describe any realized harm or incident caused by the AI system's use. The event focuses on the deployment and the debate around it, not on a specific incident of harm. Therefore, it is best classified as Complementary Information, as it provides context and societal/governance responses to AI use in public safety without reporting a concrete AI Incident or a plausible AI Hazard.
Beyonce concert audience scanned for paedophiles using facial recognition technology

2023-11-09
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (live facial recognition) used by police to scan for known offenders, which fits the definition of an AI system. The use of this system at large public events could plausibly lead to harms such as violations of privacy and discrimination, which are recognized human rights concerns. Since no actual harm or incident (e.g., wrongful arrest or injury) has been reported, the event does not meet the threshold for an AI Incident. The concerns raised by human rights groups highlight potential future harms, making this an AI Hazard. The article does not primarily focus on responses or updates to previous incidents, so it is not Complementary Information. Therefore, the appropriate classification is AI Hazard.