Facial Recognition AI Misidentifies Woman, Leading to Wrongful Six-Month Incarceration

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Kimberlee Williams, an Oklahoma resident, was wrongfully arrested and jailed for six months after facial recognition AI misidentified her as a suspect in Maryland bank fraud cases. Authorities relied on the AI match without proper verification, resulting in multiple felony charges and significant harm to Williams' rights and freedom.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of facial recognition AI technology that wrongly identified the woman as a suspect, which directly caused her to be arrested and jailed for six months on multiple felony charges she did not commit. The harm includes wrongful imprisonment and violation of legal and human rights. The police's failure to disclose the AI's role further compounds the issue. Therefore, this is a clear AI Incident as the AI system's malfunction led to direct harm to a person.[AI generated]
AI principles
Accountability; Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection

Articles about this incident or hazard

US Woman Jailed for 6 Months After Facial Recognition Misidentification

2026-04-15
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition AI technology that wrongly identified the woman as a suspect, which directly caused her to be arrested and jailed for six months on multiple felony charges she did not commit. The harm includes wrongful imprisonment and violation of legal and human rights. The police's failure to disclose the AI's role further compounds the issue. Therefore, this is a clear AI Incident as the AI system's malfunction led to direct harm to a person.
'That wasn't me': How facial recognition led to a woman being jailed for 6 months

2026-04-14
NZ Herald
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition technology, an AI system, in law enforcement identification. The technology's erroneous output directly caused wrongful criminal charges and imprisonment, which is a clear harm to the individual's rights and health. The failure to disclose the AI's role and inadequate investigation further contributed to the harm. Therefore, this is an AI Incident as the AI system's use directly led to violations of rights and harm to a person.
Facial recognition wrongly sends a woman to jail

2026-04-15
IOL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—used in law enforcement identification. The AI system's output directly led to wrongful arrest, prolonged detention, and legal charges against an innocent person, causing harm to her health, rights, and community. The failure to disclose the AI's role and lack of verification exacerbated the harm. These factors meet the criteria for an AI Incident, as the AI system's use directly caused significant harm to a person and violated her rights.
'That wasn't me': How facial recognition led to a woman being jailed for 6 months

2026-04-14
The Spokesman Review
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to identify a suspect. Its use directly led to wrongful criminal charges and imprisonment, constituting harm to a person. The police's failure to disclose the AI's role and to verify the identification further exacerbated the harm. The event meets the criteria for an AI Incident because the AI system's use directly caused significant harm to an individual through wrongful prosecution and incarceration.
Facial Recognition Fail: An AI Match, 16 Charges, and Six Months in Jail for the Wrong Woman

2026-04-15
Gadget Review
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, whose flawed matching directly caused wrongful arrest and incarceration, constituting harm to the individual's rights and liberty. The harm is realized and significant, meeting the criteria for an AI Incident. The article details the AI system's role in the investigative process and the resulting legal and personal consequences, confirming direct causation of harm. Hence, the classification as AI Incident is appropriate.
Facial-recognition software sent her to jail for months. She was the wrong person

2026-04-15
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
Facial-recognition software is an AI system that was used by police to identify a suspect. The AI system's erroneous match directly led to Williams' wrongful arrest and prolonged imprisonment, constituting harm to her rights and personal freedom. The police's reliance on the AI output without proper verification and disclosure further exacerbated the harm. This fits the definition of an AI Incident because the AI system's malfunction and misuse directly caused significant harm to a person, including violation of rights and wrongful detention.