Facial Recognition Error Leads to Wrongful Arrest and Jailing of Innocent Black Man


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Police in Louisiana used facial recognition AI to wrongly identify and arrest Randal Reid, a Black man from Georgia, for a theft he did not commit. Reid was jailed for nearly a week despite never having been to Louisiana, highlighting the technology's flaws and potential for racial bias.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition software) in policing that directly led to harm: the wrongful arrest and detention of an individual. This constitutes a violation of human rights and a breach of legal protections. The AI system's malfunction or misidentification was a pivotal factor in causing this harm, meeting the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Fairness, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Economic/Property, Public interest

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Recognition/object detection

Articles about this incident or hazard


Ayo TeKKKnology: Black Man Falsely Arrested For Theft Of $10,000 Of Louis Vuitton And Chanel Bags Based On Janky Facial Recognition Software - Bossip

2023-01-05
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) in policing that directly led to harm: the wrongful arrest and detention of an individual. This constitutes a violation of human rights and a breach of legal protections. The AI system's malfunction or misidentification was a pivotal factor in causing this harm, meeting the criteria for an AI Incident.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-03
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology by authorities, which is an AI system. The mistaken identification and subsequent wrongful arrest of Randall Reid is a direct harm caused by the AI system's malfunction or error. The harm includes violation of individual rights and racial bias, which fits the definition of an AI Incident under violations of human rights and harm to individuals. Therefore, this event qualifies as an AI Incident.

Facial recognition tool led to mistaken arrest, lawyer says - KION546

2023-01-02
KION546
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identification. The mistaken arrest caused by false positive identification directly harmed the individual by wrongful detention, which is a violation of fundamental rights. The article explicitly links the harm to the use of the AI system, fulfilling the criteria for an AI Incident.

JPSO used facial recognition technology to arrest a man. The tech was wrong.

2023-01-02
NOLA
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of facial recognition AI technology by police to identify suspects. The technology's erroneous output caused the wrongful arrest and detention of an innocent person, which is a clear harm to the individual's rights and well-being. The AI system's malfunction was a direct contributing factor to this harm. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use and malfunction.

Georgia Resident Wrongly Arrested For Nearly A Week Due To A False Match From A Facial Recognition Tool

2023-01-03
Yahoo News
Why's our monitor labelling this an incident or hazard?
The facial recognition tool is an AI system used by law enforcement to identify suspects. Its false positive match directly led to the wrongful arrest and detention of Randal Reid, causing harm to his personal freedom and emotional health. The police's reliance on the AI system without proper verification demonstrates the AI system's role in the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's malfunction and misuse.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
MSN International Edition
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for law enforcement identification. The mistaken arrest directly resulted from the AI system's erroneous output linking the wrong individual to a crime. This caused harm to the individual's liberty and rights, fulfilling the criteria for an AI Incident. The article explicitly states the harm occurred and discusses the racial bias issue, reinforcing the classification as an AI Incident.

Facial recognition technology blamed for mistaken arrest

2023-01-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for law enforcement identification. The mistaken arrest of Randall Reid was directly caused by the AI system's misidentification, which is a malfunction leading to harm to a person. The article also highlights racial disparities in misidentification rates, indicating a violation of rights and harm to communities. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and malfunction.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) whose use directly led to harm—specifically, the wrongful arrest and detention of an individual, which constitutes injury or harm to a person. The misidentification is linked to known biases in facial recognition systems affecting people of color, fulfilling the criteria for an AI Incident due to violation of rights and harm to the individual. The article details the actual harm caused, not just potential harm, so it is classified as an AI Incident rather than a hazard or complementary information.

Innocent Black Man Jailed After Facial Recognition Went Wrong: Lawyer

2023-01-03
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves a facial recognition AI system used by law enforcement to identify suspects. The AI system's erroneous output directly led to the wrongful arrest and detention of Randall Reid, constituting harm to his personal liberty and a violation of his rights. The incident is a clear example of AI malfunction and misuse causing real harm. The article also discusses broader systemic issues and legal implications, but the core event is a realized harm caused by AI, fitting the definition of an AI Incident.

Facial recognition fail lands wrong man behind bars for SIX days despite glaring discrepancies

2023-01-03
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, which malfunctioned by misidentifying an innocent person as a criminal suspect. This led to wrongful detention, a clear harm to the individual's rights and liberty. The article also highlights racial bias concerns, reinforcing the violation of rights. The harm has already occurred, so this is not a hazard or complementary information but an AI Incident.

Police in the US used facial recognition technology to arrest a man. The tech was wrong

2023-01-03
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of facial recognition technology (an AI system) by law enforcement to identify suspects. The technology produced a false positive match, which directly caused the wrongful arrest and detention of an innocent person, constituting harm to his rights and liberty. This meets the definition of an AI Incident because the AI system's malfunction directly led to harm to a person. The article also discusses broader concerns about bias and misuse of facial recognition, but the core event is a realized harm due to AI use.

Facial recognition error led to wrongful arrest of Black man, report says

2023-01-04
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition AI systems by police to identify suspects. The wrongful arrest was directly caused by an incorrect facial recognition match, which is a malfunction or misuse of the AI system. The harm includes wrongful detention and violation of the individual's rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal protections. The report also notes known biases in facial recognition systems against minorities, reinforcing the AI system's role in the harm. Hence, this is an AI Incident.

Facial recognition tool leads to wrongful arrest | Boing Boing

2023-01-04
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a facial recognition AI system that misidentified Randal Reid, leading to his wrongful arrest. This is a direct harm to the individual's rights and personal liberty, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal protections. The racial bias and flawed algorithmic design are contributing factors to the harm. The AI system's malfunction and misuse by law enforcement directly caused the wrongful arrest, making this an AI Incident rather than a hazard or complementary information.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
Market Beat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, which directly led to the mistaken arrest of Randall Reid. The harm is realized in the wrongful deprivation of liberty and the racial disparity in misidentification, which constitute a violation of rights and harm to the individual. The AI system's malfunction or bias in identification was a direct contributing factor to the incident. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Facial Recognition Software Leads to Mistaken Arrest of Georgia Man

2023-01-03
Redstate
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—facial recognition software—used by law enforcement to identify suspects. The software's erroneous match led to the wrongful arrest of Randall Reid, causing harm to his liberty and potentially his reputation, which constitutes harm to a person. The racial bias in the AI system's performance is also a violation of rights and harms communities by disproportionately affecting minorities. The harm is realized, not just potential, making this an AI Incident rather than a hazard. The event stems from the AI system's use and its malfunction (misidentification).

Man Accuses Cops of Throwing Him in Jail Based on False Facial Recognition Match

2023-01-05
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of a facial recognition AI system by police, which falsely matched the individual as a suspect. This led directly to his wrongful arrest and nearly a week-long incarceration, constituting harm to the person. The case also highlights systemic issues such as racial bias in AI tools and lack of safeguards, which are violations of rights and cause harm. Therefore, this qualifies as an AI Incident because the AI system's malfunction and misuse directly led to harm.

Facial Recognition Leads Louisiana Cops to Arrest of Innocent Man

2023-01-03
The Root
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) used by law enforcement that directly led to the false arrest of an innocent person, constituting harm to the individual's rights and liberty. The racial bias in the AI system's performance is a contributing factor to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through wrongful arrest and racial discrimination.

False Match That Led to Arrest Highlights Danger of Facial Recognition

2023-01-03
Common Dreams
Why's our monitor labelling this an incident or hazard?
The event involves a facial recognition AI system used by law enforcement that directly caused harm by falsely identifying an innocent person, leading to his wrongful arrest and detention. This meets the definition of an AI Incident because the AI system's malfunction directly led to harm to a person (wrongful imprisonment and associated emotional and social harms). The article also discusses the broader implications and risks of such technology, but the primary focus is on the realized harm from the misidentification and arrest. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Louisiana Police Wrongfully Arrests Black Man Because of Facial Recognition Error

2023-01-05
Tech Times
Why's our monitor labelling this an incident or hazard?
The police used facial recognition technology, an AI system, to identify the suspect. The system's error caused the wrongful arrest of Randal Reid, who was not the actual perpetrator. This wrongful arrest and detention for nearly a week is a clear harm to the individual's rights and well-being. The incident exemplifies how AI system malfunction can lead to violations of human rights and harm to a person, fitting the definition of an AI Incident.

Facial Recognition Tech Leads to Incorrect Arrest, Lawyer Argues

2023-01-03
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology by law enforcement that led to the wrongful arrest of Randall Reid. This wrongful arrest constitutes a violation of rights and harm to the individual. The AI system's malfunction or bias in misidentifying Reid was a direct cause of the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person and a violation of rights.

What's facial recognition technology, and how do police use it? 5 things to know

2023-01-02
NOLA
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that analyzes biometric data to identify individuals. The article explicitly mentions a false arrest caused by the technology, which constitutes harm to individuals (harm to persons). This harm is directly linked to the use of the AI system by law enforcement, fulfilling the criteria for an AI Incident. The article also discusses broader issues of regulation and potential misuse, but the presence of realized harm (false arrest) makes this an AI Incident rather than a hazard or complementary information.

Louisiana authorities used facial recognition technology to wrongfully arrest Black man, lawyer says

2023-01-04
Face2Face Africa
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to identify suspects. Its biased and inaccurate outputs led to the wrongful arrest and detention of Randall Reid, causing harm to his rights and personal liberty. This constitutes a violation of fundamental rights and is a clear example of harm caused by AI system use. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or biased outputs.

Facial Recognition Error Led To Wrongful Arrest Of Innocent Black Man

2023-01-03
Essence
Why's our monitor labelling this an incident or hazard?
The event involves the use of a facial recognition AI system by law enforcement, which misidentified the individual leading to his wrongful arrest and detention. This constitutes a direct harm to the person's rights and well-being, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction or misuse is pivotal in causing the incident.

Facial recognition tool led to mistaken arrest, lawyer says - WTOP News

2023-01-02
WTOP
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for law enforcement identification. The wrongful arrest due to misidentification is a direct harm to the individual's rights and wellbeing, fulfilling the criteria for an AI Incident. The article explicitly links the AI system's use to the mistaken arrest and highlights racial disparities in its accuracy, reinforcing the harm caused. Therefore, this event is classified as an AI Incident.

Wrong facial recognition led to jail, claims US man's lawyer

2023-01-03
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The facial recognition software, an AI system, was used by police to identify suspects. It produced a false positive match, leading to the wrongful arrest and nearly a week-long detention of Randall Reid. This is a direct harm to the individual caused by the AI system's malfunction or error. The event clearly meets the criteria for an AI Incident as the AI system's use directly led to harm to a person.

Ayo TeKKKnology: Black Man Falsely Arrested For Theft Of $10,000 Of Louis Vuitton And Chanel Bags Based On Janky Facial Recognition Software

2023-01-05
Bossip
Why's our monitor labelling this an incident or hazard?
The event explicitly involves facial recognition software, an AI system, used in policing. The software's malfunction or misidentification directly led to the wrongful arrest and detention of Randall Reid, causing harm to his personal freedom and potentially his rights. This meets the criteria for an AI Incident as the AI system's use directly caused harm to a person.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
Times Union
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a facial recognition tool by law enforcement that erroneously linked Randall Reid to crimes he did not commit, resulting in his wrongful arrest and detention. Facial recognition is an AI system as it involves automated biometric identification using machine learning. The harm here is a violation of individual rights and wrongful imprisonment, which fits the definition of harm to a person. The AI system's malfunction or misuse directly caused this harm. Hence, this event is classified as an AI Incident.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
The Buffalo News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for law enforcement identification. The mistaken arrest of Randall Reid due to erroneous AI identification caused direct harm to him, fulfilling the criteria for an AI Incident. The article details the misuse and malfunction (misidentification) of the AI system leading to realized harm, not just potential harm. Therefore, this event qualifies as an AI Incident.

An innocent man was just arrested after yet another facial recognition failure - SiliconANGLE

2023-01-05
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) explicitly used by law enforcement. The wrongful arrest and detention of an innocent man constitute harm to the individual's rights and well-being, fulfilling the criteria for harm to persons. The AI system's failure directly led to this harm, making it an AI Incident. The racial bias and wrongful arrests reported further support the classification as an incident rather than a hazard or complementary information.

Innocent man arrested after yet another facial recognition failure - SiliconANGLE

2023-01-05
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) used by police to identify suspects. The system malfunctioned by producing a false positive, leading to the wrongful arrest and detention of an innocent person. This constitutes direct harm to the individual's health and well-being (stress, potential job loss) and a violation of rights (wrongful arrest). The article explicitly links the AI system's failure to these harms, meeting the criteria for an AI Incident.

Black Man Jailed After An Algorithm Falsely Accuses Him Of Stealing De

2023-01-04
News One
Why's our monitor labelling this an incident or hazard?
The event involves a facial recognition AI system used by law enforcement to identify suspects. The system malfunctioned or was biased, leading to a false identification and wrongful arrest of Randall Reid. This caused direct harm to his liberty and is a violation of his rights. The systemic bias in the algorithm against Black people further supports classification as an AI Incident due to violation of rights and harm to the individual and community. Therefore, this qualifies as an AI Incident under the framework.

Georgia Man Arrested for a Crime In a State He Has Never Visited Based on Facial Recognition Tool: 'Can't Tell Black People Apart'

2023-01-03
Atlanta Black Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of a facial recognition AI system by police, which misidentified the individual leading to his wrongful arrest and detention. This constitutes a violation of human rights and a breach of legal protections due to the AI system's biased and inaccurate outputs. The harm is realized and directly linked to the AI system's malfunction or misuse, fitting the definition of an AI Incident.

Facial recognition tool led to mistaken arrest, lawyer says

2023-01-02
Omaha.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology, an AI system, which directly led to the mistaken arrest of Randall Reid. This wrongful arrest constitutes harm to the individual (a person) and implicates violations of rights due to racial bias in the AI system's performance. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through misidentification and wrongful detention.

Facial recognition tool led to mistaken arrest, lawyer says | Federal News Network

2023-01-02
Federal News Network
Why's our monitor labelling this an incident or hazard?
The article explicitly states that facial recognition technology was used by authorities and that it led to the mistaken arrest of Randall Reid. The harm is realized, as Reid was wrongfully jailed due to misidentification by the AI system. The harm includes violation of rights and racial disparities in AI performance, which are recognized harms under the framework. The AI system's malfunction (misidentification) directly caused the incident. Hence, this is an AI Incident.

Mistaken arrest in Georgia triggered by false facial recognition match in different state | Biometric Update

2023-01-03
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose malfunction (false match) directly led to harm—wrongful arrest and detention—constituting a violation of rights and harm to the individual. The AI system's role is pivotal in causing the harm, meeting the criteria for an AI Incident. The racial bias and systemic issues further underline the significance of the harm caused.

Georgia man is falsely arrested after facial recognition tech gets it wrong

2023-01-05
Reclaim The Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose misidentification directly led to the wrongful arrest of a person, harming his liberty and rights. This fits the definition of an AI Incident because the AI system's use directly caused realized harm to a person. The system's racial bias and inaccuracy further underline its role in the harm.

Face ID Tech Causes Police to Jail Innocent Black Men, But the U.S. Has No Laws Restricting It

2023-01-03
The New Civil Rights Movement
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, by police to identify suspects. The technology's misidentification directly caused the wrongful arrest and week-long imprisonment of an innocent person, which is a clear harm to health and liberty (a form of injury/harm to a person). The article also highlights racial bias issues inherent in the AI system, further supporting the classification as an AI Incident. The absence of laws restricting this technology's use increases the risk of repeated harm but does not negate the fact that harm has already occurred. Hence, this is an AI Incident, not merely a hazard or complementary information.

Another Facial Recognition Snafu Leads to False Arrest, Wrongful Imprisonment; ACLU Asks Lawmakers to Ban Police Use

2023-01-06
SGT Report
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police that made a mistaken identification, resulting in the wrongful arrest and imprisonment of an innocent person. This is a direct harm to the individual's rights and liberty, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction or error is pivotal in causing the incident.

Louisiana Police Nab Innocent Man After Facial Recognition Fail

2023-01-04
The Crime Report
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system used here for law enforcement identification. The wrongful arrest and detention of an innocent person due to a false match by the AI system constitutes direct harm to the individual's rights and liberty, which falls under violations of human rights and harm to the person. Therefore, this event qualifies as an AI Incident because the AI system's malfunction directly led to harm.

Black Man Jailed After An Algorithm Falsely Accuses Him Of Stealing De

2023-01-04
The Urban Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use of a facial recognition AI system by police, which falsely identified Randall Reid as a suspect, leading to his wrongful arrest and incarceration. This constitutes direct harm to an individual (harm to health and liberty) and a violation of rights due to biased AI outputs. The AI system's malfunction and biased design are pivotal in causing this harm, meeting the criteria for an AI Incident.