Faulty AI Facial Recognition Leads to Wrongful Arrest in Detroit

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Detroit police wrongly arrested LaDonna Crutchfield after a facial recognition error misidentified her as a shooting suspect. Although police dispute that facial recognition was used in this instance, a lawsuit argues that reliance on the technology violated her rights and caused significant emotional distress.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes a wrongful arrest based on a false facial recognition match; facial recognition is an AI system used in law enforcement. The harm includes emotional distress and a violation of rights due to misidentification. The AI system's malfunction or misuse directly led to this harm, fulfilling the criteria for an AI Incident. Despite police denial, the lawsuit and a pattern of similar cases support the conclusion that AI facial recognition caused the harm.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard

Detroit police falsely arrested woman after faulty facial recognition hit, lawsuit says

2025-02-25
The Detroit News
Detroit Police Allegedly Arrested the Wrong Woman Because She's 'Fat and Black'

2025-02-28
The Root
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, which allegedly led to the wrongful arrest of LaDonna Crutchfield. This wrongful arrest constitutes a violation of her rights and personal liberty, fitting the definition of harm under human rights violations. The AI system's use in this context directly contributed to the harm, making this an AI Incident rather than a hazard or complementary information.
Detroit police wrongly arrested woman after facial recognition tech misidentified her as shooting culprit

2025-02-26
Reason.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition software used by police to identify suspects. The wrongful arrest and detention caused direct harm to the individual, including violation of constitutional rights and emotional distress. The harm is clearly linked to the AI system's erroneous identification and the police's reliance on it without further investigation. Therefore, this qualifies as an AI Incident due to direct harm and rights violation caused by the AI system's use.
Detroit police sued over alleged facial recognition use in another wrongful arrest | Biometric Update

2025-02-26
Biometric Update | Biometrics News, Companies and Explainers
Why's our monitor labelling this an incident or hazard?
The event describes a wrongful arrest allegedly caused by a facial recognition system, an AI system, which directly led to harm to a person (wrongful arrest, violation of rights). Even though the police deny using facial recognition in this case, the lawsuit and claims focus on the AI system's role in the harm. The wrongful arrest and forced biometric collection are clear harms linked to AI system use or misuse. This fits the definition of an AI Incident because the AI system's use has directly led to harm. The article also discusses broader concerns about facial recognition harms, reinforcing the incident classification.
'Why? Because I Am Fat and Black Like Her?': Detroit Woman Falsely Accused of Attempted Murder Through Facial Recognition Software Files Lawsuit Against Police

2025-02-27
Atlanta Black Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—used by law enforcement to identify suspects. The software's malfunction or inaccuracy directly led to the false arrest and detention of an innocent person, causing harm to her rights and personal freedom. The harm is realized, not just potential, and the AI system's role is pivotal in the incident. Therefore, this qualifies as an AI Incident under the OECD framework, specifically a violation of rights and harm to the individual due to AI misuse or malfunction.
Facial recognition falsely IDs mom as attempted murder suspect in Michigan, lawsuit says

2025-02-27
Kansas City Star
Why's our monitor labelling this an incident or hazard?
The event describes a wrongful arrest and racial discrimination allegedly caused by faulty facial recognition technology, an AI system. The harm includes a violation of rights and emotional distress, which are direct harms under the AI Incident definition. Although the police deny the use of facial recognition, the lawsuit's claim and the nature of the harm justify classification as an AI Incident, given the direct or indirect role of the AI system in causing harm. The event is not merely a potential risk or complementary information but a reported harm involving AI.
Woman sues, claiming Detroit police used facial recognition to mistakenly detain her for attempted murder

2025-02-28
Face2Face Africa
Why's our monitor labelling this an incident or hazard?
The event describes a wrongful arrest and detention directly linked to the use or alleged use of facial recognition technology, an AI system. The harm includes a violation of personal liberty and rights, wrongful detention, and the forced collection of DNA and fingerprints. The incident fits the definition of an AI Incident because the AI system's use (facial recognition) directly led to harm to a person. The police's denial does not negate the plaintiff's claim, particularly given the context of prior similar incidents involving facial recognition. Therefore, this is classified as an AI Incident.
Detroit woman suing police, claiming faulty facial recognition technology led to unjust arrest

2025-02-28
NBC News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically facial recognition technology, which was used by police to identify a suspect. The technology's malfunction or misapplication led to the wrongful arrest of LaDonna Crutchfield, causing emotional distress and reputational harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person, including violation of rights and emotional injury. The lawsuit and the description of the event confirm that the AI system's role was pivotal in causing the harm.
Detroit woman suing police, claiming faulty facial recognition technology led to unjust arrest

2025-02-28
Aol
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition technology, an AI system, in law enforcement identification processes. The wrongful arrest and associated emotional distress represent direct harm to the individual, fulfilling the criteria for an AI Incident. Despite police denial, the plaintiff's claim that the AI system led to misidentification and unjust arrest is central to the event. This aligns with the definition of an AI Incident where AI use has directly led to violations of rights and harm to a person.
DPD denies use of facial recognition as woman sues department for false arrest

2025-02-26
WXYZ
Why's our monitor labelling this an incident or hazard?
The event describes a false arrest resulting from a mistaken identification linked to an image from a video and a database search, with allegations of facial recognition use. Facial recognition is an AI system. Although the police deny using facial recognition, the attorney and the circumstances suggest that facial recognition, or at least some AI technology, was involved in the identification process. The false arrest and detention harmed the individual's rights and liberty, which fits the harm criteria for an AI Incident. Even if disputed, the AI system's role is pivotal in the chain of events leading to harm, because the identification method (likely AI-based) directly led to the wrongful arrest. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.