Detroit Police Facial Recognition Misidentifications Lead to Lawsuits and Policy Changes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Detroit police's use of facial recognition technology resulted in three cases of misidentification and wrongful arrests, prompting lawsuits and a significant reduction in the technology's use. Policy changes and a 2024 settlement have led to stricter governance and a 91% drop in searches since 2023. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the use of an AI system (facial recognition) in a real-world setting with potential privacy and surveillance harms. However, no direct or indirect harm has been reported as having occurred. The concerns raised about data security, consent, and surveillance normalization constitute plausible risks that could lead to harm in the future. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impacts.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Fans erupt as Disneyland rolls out facial recognition technology across park entrances

2026-04-28
New York Post
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system (facial recognition) in a real-world setting with potential privacy and surveillance harms. However, no direct or indirect harm has been reported as having occurred. The concerns raised about data security, consent, and surveillance normalization constitute plausible risks that could lead to harm in the future. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impacts.

Disneyland implements facial recognition to keep the lines moving, but guests say they didn't know it was optional | Fortune

2026-04-28
Fortune
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) used in a real-world setting. The system's use is described, including data processing and privacy measures. However, no direct or indirect harm has been reported as having occurred. The concerns raised by guests about lack of clear consent and unease with biometric data use indicate plausible future harm, such as privacy violations or unauthorized data use, which are violations of human rights. Since the harm is potential and not realized, this fits the definition of an AI Hazard. The article also discusses legal frameworks like the CCPA, reinforcing the privacy and rights context. Thus, the classification as AI Hazard is appropriate.

Disneyland rolls out facial recognition at park entrances. Here's how it works

2026-04-28
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology (an AI system) for biometric identification at Disneyland entrances. However, it does not report any realized harm such as injury, rights violations, or data breaches resulting from this deployment. The concerns expressed relate to plausible future harms, such as privacy violations or data misuse, but these remain speculative. Therefore, the event fits the definition of an AI Hazard, as the use of facial recognition technology could plausibly lead to harms, especially regarding privacy and data security, but no incident has yet occurred.

Disneyland Starts Using Facial Recognition Technology on Guests

2026-04-28
The Daily Beast
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for biometric screening. The article does not report any actual harm or incident resulting from its use, but it does raise credible concerns about potential misuse and data security risks that could plausibly lead to harm in the future. Therefore, this event fits the definition of an AI Hazard, as the deployment of this AI system could plausibly lead to violations of privacy rights or other harms, even though no incident has yet occurred.

Nicole Kidman's Daughter Sends Brutal Message to Her Dad Keith Urban

2026-04-28
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event involves the deployment of an AI system (facial recognition) for biometric screening. Although no actual harm has been reported, the concerns raised by privacy advocates about surveillance and data security represent plausible future harms related to human rights and privacy. Therefore, this qualifies as an AI Hazard rather than an Incident, as the potential for harm exists but has not yet materialized.

A whole new world: Disneyland adds facial recognition to some entrance lanes - AOL

2026-04-29
AOL.com
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system involving biometric data processing. Its deployment at Disneyland for entrance management is a use of AI. However, the article does not describe any realized harm such as privacy breaches, wrongful identification, or discrimination. Instead, it highlights potential privacy concerns and the possibility of future misuse or security failures. This fits the definition of an AI Hazard, as the technology's use could plausibly lead to incidents involving privacy violations or other harms, but no such incident has occurred or been reported in this context yet.

Disneyland is now scanning your face at nearly every gate, sparking privacy concerns

2026-04-28
The Spokesman Review
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for biometric identification. The article highlights privacy concerns and the potential for misuse or data breaches, which could lead to violations of privacy rights or other harms. No actual harm or incident is reported, only concerns and potential risks. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet occurred.

75 Groups Declare War on Meta's Plan to Turn Ray-Bans Into Portable Facial Recognition Weapons

2026-04-28
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—facial recognition technology integrated into smart glasses—that could directly lead to violations of human rights and privacy (harm category c). The article details the potential for non-consensual biometric data collection and surveillance, which constitutes a significant, clearly articulated harm. Since the technology is planned but not yet deployed, and the harms are potential but credible and serious, this qualifies as an AI Hazard. The involvement of AI in facial recognition and the plausible future harm to privacy and rights justify this classification.

Disney is using facial recognition to confirm tickets and stop fraud at one of its parks

2026-04-28
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) in a commercial setting, which is explicitly described. However, the article does not report any realized harm such as injury, rights violations, or data breaches occurring due to the system's deployment. Instead, it focuses on the rollout, user reactions, and expert concerns about potential privacy and security risks. Therefore, this qualifies as an AI Hazard because the technology's use could plausibly lead to harms like privacy violations or data breaches, but no actual incident has occurred yet.

A whole new world: Disneyland adds facial recognition to some entrance lanes

2026-04-29
Yahoo
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system involved in the event. Its use at Disneyland is active, but the article does not report any direct or indirect harm resulting from its deployment. The concerns are about privacy and surveillance, which are potential risks but not realized harms in this context. Therefore, the event does not qualify as an AI Incident. It also does not primarily focus on warnings or credible risks of future harm beyond general privacy concerns, so it is not an AI Hazard. The article mainly provides information about the deployment and societal context of the technology, including privacy debates and company measures, which fits the definition of Complementary Information.

Disneyland guests can opt out of facial recognition at entry

2026-04-27
Blooloop
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition combined with biometric technology) and its use in a real-world setting. However, there is no indication that the system has caused any injury, rights violations, or other harms. The presence of an opt-out option and stated security measures further reduce the likelihood of harm. Since no harm has occurred and no plausible future harm is explicitly indicated, this event is best classified as Complementary Information, providing context and details about the AI system's deployment and privacy considerations.

Tighter policies lead to fewer facial recognition searches for Detroit police

2026-04-28
Biometric Update
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by Detroit police. The article details three cases of misidentification leading to wrongful arrests and lawsuits, which constitute harm to individuals and violations of rights. This meets the criteria for an AI Incident because the AI system's use directly led to harm. The article also discusses governance and policy changes as responses, but the primary focus is on the harms caused by the AI system's use. The school use of facial recognition is described without reported harm, so it does not change the classification. Thus, the event is classified as an AI Incident.

Disneyland rolls out facial recognition at US park's entrances

2026-04-28
dpa International
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system explicitly mentioned as being used at Disneyland entrances. The article discusses its use (not development or malfunction) and the potential privacy and surveillance harms that could arise, including data breaches and misuse. However, no direct or indirect harm has been reported as having occurred so far. The concerns expressed are about possible future harms and normalization of surveillance, which fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information. It is not unrelated because it clearly involves AI systems and their societal implications.

Outrage as Disneyland launches 'dystopian' tech at park entrances

2026-04-28
Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition) in operation, which qualifies as an AI system. However, no actual harm or incident resulting from the AI system is described. The concerns raised are about privacy and potential misuse, but no direct or indirect harm has occurred or is reported. The company has implemented safeguards and participation is voluntary, further reducing immediate risk. The article mainly discusses public opinion and the introduction of the technology, which fits the definition of Complementary Information rather than an Incident or Hazard.