Pregnant Woman Wrongfully Arrested After Faulty Facial Recognition Match in Detroit

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Porcha Woodruff, a Black woman eight months pregnant, was wrongfully arrested by Detroit police for carjacking after facial recognition AI misidentified her. The incident caused her physical and emotional harm, highlighting the technology’s flaws, especially in identifying people of color. Woodruff is now suing the city for false arrest.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition software is an AI system used by police to identify suspects. In this case, the software misidentified Porcha Woodruff, leading to her wrongful arrest. This is a direct harm to an individual caused by the AI system's use, fitting the definition of an AI Incident due to violation of rights and harm to a person.[AI generated]
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
Women; General public

Harm types
Physical (injury); Psychological; Human or fundamental rights; Public interest; Reputational

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection

Articles about this incident or hazard

United States: police facial recognition under fire again after an arrest

2023-08-07
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system used by police to identify suspects. In this case, the software misidentified Porcha Woodruff, leading to her wrongful arrest. This is a direct harm to an individual caused by the AI system's use, fitting the definition of an AI Incident due to violation of rights and harm to a person.

In the US, the Black community remains the primary victim of facial recognition errors

2023-08-08
Clubic.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI by police leading to wrongful arrests, which is a direct harm to the individuals involved, particularly violating their rights and causing personal harm. The disproportionate impact on the Black community highlights a systemic issue linked to the AI system's malfunction or bias. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

United States: facial recognition failure during a police operation

2023-08-07
Le Matin
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (facial recognition technology) in a police operation and the harm caused by its malfunction or bias, which can lead to violations of human rights and wrongful police action. The system's failure to correctly identify individuals, especially as a result of biased training data, constitutes an AI Incident because it directly led to harm, or a risk of harm, to individuals' rights and safety.

In the United States, facial recognition points to an innocent woman

2023-08-08
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) used by police to identify suspects. The system misidentified an innocent woman, leading to her wrongful arrest and detention, causing physical and psychological harm. This is a direct harm caused by the AI system's malfunction or misuse. The harm includes violation of rights and health harm (stress-induced contractions during pregnancy). Therefore, it meets the criteria for an AI Incident.

United States: a Black woman says she was wrongly arrested because of facial recognition

2023-08-07
BFMTV
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system as it performs automated comparison and ranking of faces to identify suspects. Its use directly led to the wrongful arrest and detention of Porcha Woodruff, causing harm to her health and rights. The event involves the use of the AI system leading to realized harm (wrongful arrest, detention, and distress), fitting the definition of an AI Incident. The systemic issue of multiple wrongful arrests based on this technology further supports this classification.

In the United States, a Black woman eight months pregnant falls victim to facial recognition's racist biases

2023-08-07
Libération
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI facial recognition system by police, which misidentified Porcha Woodruff, leading to her wrongful arrest and detention. This caused direct harm to her health (spasms, panic attack, dehydration) and psychological distress, as well as a violation of her rights. The AI system's biased design and deployment are central to the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction and biased outputs.

Facial recognition: a pregnant woman jailed for nothing after the software got it wrong

2023-08-08
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The police used a facial recognition AI system to identify suspects from surveillance footage. The system incorrectly matched the suspect to Porcha Woodruff, who was not at the crime scene. This led to her wrongful arrest, 11 hours in custody, and significant personal and financial harm. The AI system's malfunction (false positive identification) directly caused these harms. Additionally, the racial bias in the system's errors raises concerns about violations of rights. Hence, this event meets the criteria for an AI Incident.

Facial recognition called into question again in the United States

2023-08-07
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI by police to identify a suspect, which led to the wrongful arrest of Porcha Woodruff. The harm includes physical stress causing contractions during pregnancy, emotional distress, and violation of rights due to misidentification by a biased AI system. This fits the definition of an AI Incident because the AI system's use directly led to harm and rights violations. The article also references known biases in the technology, reinforcing the AI system's role in the harm.

United States - Facial recognition failure during a police operation

2023-08-07
Tribune de Genève
Why's our monitor labelling this an incident or hazard?
The police used an AI-based facial recognition system to identify a suspect, which erroneously matched the plaintiff, leading to her wrongful arrest and detention. This caused direct harm to her health and violated her rights, fulfilling the criteria for an AI Incident. The event involves the use and malfunction of an AI system resulting in realized harm, not just potential harm or complementary information.

A pregnant Detroit woman sues the city after being falsely arrested because of facial recognition, prompting Detroit police to end facial recognition checks

2023-08-09
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system—facial recognition technology—by law enforcement to identify suspects. The wrongful arrest based on this technology directly caused harm to the individual (emotional distress, potential pregnancy complications) and implicates violations of rights. The article details actual harm that has occurred, not just potential harm, and the AI system's malfunction (misidentification) is a direct contributing factor. Therefore, this qualifies as an AI Incident under the OECD framework.

In the United States, a Black woman wrongly accused of a crime because of facial recognition

2023-08-08
madmoiZelle.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) by law enforcement, which directly led to harm: wrongful arrest, physical and psychological harm to the individual, and violation of her rights. The AI system's known bias against Black individuals and the police's failure to mitigate these risks caused the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction and misuse directly caused harm to a person and violated fundamental rights.

Wrongly accused by a facial recognition algorithm

2023-08-11
Next INpact.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a facial recognition AI system that misidentified Porcha Woodruff, leading to her wrongful arrest and detention. This constitutes a direct harm to her human rights and health (physical and emotional harm), fulfilling the criteria for an AI Incident. The involvement of the AI system in the wrongful accusation and subsequent harm is clear and direct.

Pregnant woman's arrest in carjacking case spurs call to end Detroit police facial recognition - KION546

2023-08-07
KION546
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here in law enforcement. Its use directly led to the wrongful arrest of a person, causing emotional harm, which qualifies as injury or harm to a person. This is a direct harm caused by the AI system's use, thus constituting an AI Incident. The lawsuit and criticism highlight the harm caused by the AI system's malfunction or misuse (misidentification).

US-court-tech

2023-08-07
nampa.org
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—facial recognition technology—that directly led to a false arrest, which constitutes harm to the individual (violation of rights and personal harm). The AI system's malfunction or unreliability caused this harm, fitting the definition of an AI Incident.

Pregnant woman's arrest in carjacking case spurs call to end Detroit police facial recognition

2023-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. The wrongful arrest based on this technology's misidentification caused emotional distress and potential health risks to a pregnant woman, fulfilling the criteria for harm to a person. The lawsuit and multiple similar cases highlight systemic issues with the AI system's accuracy and its impact on human rights, specifically wrongful arrest and emotional harm. Therefore, this event is classified as an AI Incident due to direct harm caused by the AI system's use.

Detroit police chief says 'poor investigative work' led to arrest of Black mom who claims facial recognition technology played a role

2023-08-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used in a law enforcement context. The AI system's outputs were used as part of the investigation and identification process, which directly led to the wrongful arrest of a person, constituting harm to an individual (harm to health and rights). The lawsuit alleges racial discrimination linked to the AI system's higher error rates for Black individuals, indicating a violation of rights. The police chief acknowledges policy violations related to the use of facial recognition in the investigation. Therefore, the event meets the criteria for an AI Incident because the AI system's use directly led to harm (wrongful arrest and alleged rights violations).

Woman Sues Detroit for Arresting Her Based on Wrong Facial Recognition Match

2023-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by law enforcement in this case. The wrongful identification and subsequent arrest of Porcha Woodruff, a pregnant Black woman, caused direct harm including risk to her health and violation of her rights. The lawsuit and case dropping confirm the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or misuse.

Pregnant woman arrested after facial recognition tech error

2023-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose malfunction (false positive match) directly caused the wrongful arrest and detention of Porcha Woodruff, leading to physical harm and violation of her rights. The article details realized harm, including health consequences and legal charges dismissed due to insufficient evidence. The systemic bias and repeated similar incidents further support classification as an AI Incident. Therefore, this qualifies as an AI Incident under the framework because the AI system's malfunction directly led to harm and rights violations.

Detroit woman sues city after being falsely arrested while 8 months pregnant due to facial recognition technology

2023-08-06
Yahoo News
Why's our monitor labelling this an incident or hazard?
The facial recognition technology is an AI system used by the police detective to identify suspects. Its erroneous match led to the wrongful arrest of Porcha Woodruff, causing physical and psychological harm (stress-induced contractions and dehydration) and a violation of her rights. The AI system's malfunction or misuse is a direct factor in the harm experienced, meeting the criteria for an AI Incident under the definitions provided.

Detroit woman suing police after 'shoddy' AI facial recognition leads to false arrest during her pregnancy

2023-08-08
Yahoo News
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition software) was explicitly used by police to identify a suspect, which directly led to the wrongful arrest of Porcha Woodruff. This caused harm to her health (stress during pregnancy), emotional distress, and harm to her family (children witnessing the arrest). The AI system's malfunction (incorrect match) was a pivotal factor in the incident. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm caused by the use and malfunction of an AI system in law enforcement.

Mom Claims Bogus Facial Recognition Led To False Arrest While 8 Months Pregnant

2023-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its erroneous output directly caused the wrongful arrest and subsequent physical and emotional harm to Woodruff, including jeopardizing her pregnancy. This constitutes injury and harm to a person due to AI system malfunction. Therefore, this event qualifies as an AI Incident under the framework.

Black mom sues city of Detroit claiming she was falsely arrested while 8 months pregnant by officers using facial recognition technology | CNN

2023-08-07
CNN International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police to identify a suspect. The AI system's malfunction (an unreliable facial recognition match) directly led to the wrongful arrest of Porcha Woodruff, causing physical harm (stress-induced contractions) and emotional harm (humiliation, embarrassment). The lawsuit also highlights racial bias in the AI system, which is a violation of civil rights. These factors meet the criteria for an AI Incident, as the AI system's use directly caused harm to a person and implicates violations of rights under applicable law.

Eight Months Pregnant and Arrested After False Facial Recognition Match

2023-08-06
The New York Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (facial recognition technology) whose malfunction (false match) directly caused harm to a person, including wrongful arrest, physical and emotional distress, and violation of rights. The harm is realized and significant, including health impacts and legal consequences. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person and violation of rights.

In every reported case where police mistakenly arrested someone using facial recognition, that person has been Black

2023-08-06
Business Insider
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by police departments to identify suspects. The article reports multiple false arrests caused by faulty facial recognition, all involving Black individuals, indicating racial bias and harm. This misuse and malfunction of the AI system has directly led to violations of rights and harm to individuals and communities, fitting the definition of an AI Incident.

Pregnant woman wrongly charged with robbery over facial recognition

2023-08-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police to identify suspects. The AI system's erroneous match directly caused the wrongful arrest and charging of Porcha Woodruff, leading to physical harm (medical distress and contractions), psychological harm (panic attack), and violation of her rights (wrongful imprisonment). The harm is realized and significant. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to harm and rights violations.

False facial recognition leaves MI pregnant woman wrongfully arrested

2023-08-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in law enforcement that directly caused harm to a person through wrongful arrest and associated physical and emotional distress. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and a violation of rights. The article also references similar cases and systemic issues with facial recognition, reinforcing the classification as an incident rather than a hazard or complementary information.

Detroit Woman Sues City Police After Being Wrongfully Arrested Due To AI Facial Recognition

2023-08-07
Forbes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition) whose use by law enforcement directly caused harm to a person through wrongful arrest. The harm includes physical and psychological effects, and the incident reflects known issues with AI facial recognition accuracy and bias. Therefore, it meets the criteria for an AI Incident as the AI system's malfunction directly led to harm.

Woman falsely arrested while eight months pregnant due to faulty facial recognition tech sues Detroit

2023-08-07
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police to identify suspects. The technology's malfunction (faulty identification) directly led to the false arrest and detention of Porcha Woodruff, causing physical and emotional harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (injury to health, violation of rights). The lawsuit and police investigation further confirm the seriousness of the incident.

Lawsuit filed after facial recognition tech causes wrongful arrest of pregnant woman

2023-08-08
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in law enforcement that directly led to the wrongful arrest of a pregnant woman, causing physical harm, emotional distress, and violations of civil rights. The harm is realized and significant, meeting the criteria for an AI Incident under the definitions provided. The AI system's malfunction or flawed use was a contributing factor to the harm experienced by the individual.

Black mother sues Detroit claiming she was falsely arrested at 8 months pregnant due to facial ID tech

2023-08-08
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition software used by law enforcement. The AI system's use directly caused harm to the individual through a false arrest, which constitutes injury to health and a violation of rights. The harm is realized and documented, not hypothetical. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction and biased outputs led to direct harm and rights violations.

In every reported case where police mistakenly arrested someone using facial recognition, that person has been Black

2023-08-06
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition AI systems by police departments, which have directly led to false arrests and racial discrimination against Black people. This constitutes harm to individuals' rights and communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's malfunction and biased development are central to the incident.

'Remorseless' former police officer jailed for his role in George Floyd's death (original video)

2023-08-08
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by law enforcement in this case. The wrongful arrest based on this technology's output caused harm to Porcha Woodruff and her family, including emotional trauma and violation of her rights. The lawsuit and public outcry indicate the AI system's malfunction or misuse directly led to this harm, fitting the definition of an AI Incident.

Pregnant US Woman, Arrested After False Facial Recognition Match, Sues Police

2023-08-07
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that facial recognition technology was used by the police to identify Ms. Woodruff as a suspect, which was an unreliable match leading to her wrongful arrest and detention. The harm includes physical and emotional distress during detention while pregnant, as well as legal and reputational harm. The AI system's malfunction (false match) directly led to these harms, fulfilling the criteria for an AI Incident under the framework.

Detroit woman sues city over false carjacking arrest while 8 months...

2023-08-07
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, which is an AI system. The technology's erroneous identification directly led to the wrongful arrest and detention of Porcha Woodruff, causing physical and emotional harm. The charges were later dropped due to insufficient evidence, confirming the AI system's role in the harm. Therefore, this is an AI Incident as the AI system's malfunction directly caused harm to a person.

Detroit woman sues city after being falsely arrested while 8-months pregnant due to facial recognition technology

2023-08-06
NBC News
Why's our monitor labelling this an incident or hazard?
The facial recognition technology, an AI system, was used in the investigation and produced an unreliable match that led to the false arrest of Porcha Woodruff. This wrongful arrest caused direct harm to her health (stress-induced contractions and dehydration) and violated her rights. The AI system's malfunction (unreliable match) was a contributing factor to these harms, meeting the criteria for an AI Incident.

Woman Sues City For Arresting Her While Pregnant Based On False Facial Recognition

2023-08-07
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—facial recognition technology—used in law enforcement. The wrongful arrest and detention of Porcha Woodruff, based on a false AI match, constitutes direct harm to her health and rights, fulfilling the criteria for an AI Incident. The harm includes physical health impact (hospitalization), violation of constitutional rights (Fourth Amendment), and wrongful imprisonment. The AI system's flawed performance, particularly its racial bias, is central to the incident. Therefore, this is classified as an AI Incident.

Black mother sues Detroit claiming wrongful arrest while pregnant due to face ID tech

2023-08-08
The Independent
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used by law enforcement to identify suspects. Its malfunction or bias led to a wrongful arrest, causing physical and emotional harm to the individual, which qualifies as injury to a person (harm to health) and violation of rights. The event is a clear AI Incident because the AI system's use directly caused harm and legal consequences. The lawsuit and public concern further emphasize the harm caused by the AI system's biased outputs.

Detroit woman sues city over false arrest while 8 months pregnant due to faulty facial recognition

2023-08-07
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose malfunction or misuse directly caused harm to a person (false arrest, stress-induced health issues). The harm is realized and significant, including violation of rights and physical health impacts. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ACLU of Michigan calls on Detroit police to stop use of facial recognition technology after woman claims she was falsely accused of crime

2023-08-08
CBS News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used for identifying suspects. The false arrests due to misidentification demonstrate direct harm to individuals' rights and well-being, fulfilling the criteria for an AI Incident. The event describes realized harm (wrongful arrests) caused by the AI system's malfunction or misuse, including failure to properly investigate beyond AI outputs. The disproportionate impact on Black communities further supports the classification as harm to human rights. The ongoing lawsuits and calls for policy changes are responses to this incident but do not change the classification of the event itself.

Detroit police plan facial recognition policy changes after false arrest lawsuit

2023-08-10
CBS News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to generate investigative leads. The false arrest of Porcha Woodruff is a direct harm to her rights and personal liberty, fulfilling the criteria for an AI Incident. The AI system's role was indirect but pivotal, as it provided the suspect photo that led to the wrongful identification. The article details actual harm caused, not just potential harm, and the police department's response confirms the incident's significance. Therefore, this event qualifies as an AI Incident.

A Detroit woman who says she was falsely arrested while 8 months pregnant says she spent 11 hours on a concrete jail bench before being released. She says a bogus facial recognition match is to blame.

2023-08-07
Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system—facial recognition software—whose malfunction (false match) directly led to the wrongful arrest and detention of Porcha Woodruff. The harms include physical and psychological injury, violation of rights, and distress to her family. The AI system's role is pivotal as it was the basis for the police action. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly caused significant harm to a person.

AI facial recognition falsely identifies pregnant woman as a wanted criminal, she sues police

2023-08-09
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition software used by police to identify suspects. The AI system's use directly led to the wrongful arrest of a pregnant woman, causing physical harm (dehydration, panic attack) and emotional distress, as well as legal and human rights violations. The incident is part of a pattern of similar false arrests, indicating systemic harm. The AI system's malfunction or flawed outputs were pivotal in causing the harm, fulfilling the criteria for an AI Incident.

Pregnant woman wrongfully accused of robbery, carjacking because of 'unreliable' facial recognition technology: Lawsuit

2023-08-07
TheBlaze
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose malfunction (unreliable match) directly caused the wrongful arrest of Porcha Woodruff, leading to physical and emotional harm. The AI system's erroneous output was a pivotal factor in the incident, fulfilling the criteria for an AI Incident as it caused injury to a person and a violation of rights. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Eight months pregnant and arrested after false facial recognition match

2023-08-07
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition technology) used by law enforcement. The false match led directly to the wrongful arrest and detention of Ms. Woodruff, causing harm to her health and violating her rights. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm (a) injury or harm to a person, and (c) violation of rights. The harm is realized, not just potential, and the AI system's role is pivotal in the chain of events.

Innocent pregnant woman jailed amid faulty facial recognition trend

2023-08-07
Ars Technica
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used by police to identify suspects. Its malfunction—incorrectly matching Woodruff's image to surveillance footage—directly led to her wrongful arrest, detention, and trauma, fulfilling the criteria for harm to a person. The article also notes multiple similar incidents and systemic racial bias, reinforcing the AI system's role in causing violations of rights and harm to individuals. The involvement of the AI system in the development, use, and malfunction stages is clear, and the harm is realized, not just potential. Hence, this is classified as an AI Incident.

Experts remain divided on facial recognition technology despite another wrongful arrest

2023-08-08
NJ.com
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by law enforcement in investigations. The wrongful arrests and associated harms (false imprisonment, emotional distress) are direct consequences of the AI system's malfunction or misuse. The disproportionate impact on marginalized communities constitutes a violation of rights. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.

I was wrongly arrested after major AI blunder - now I'm suing police

2023-08-08
The US Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) used by law enforcement to identify a suspect. The technology's flawed identification directly caused the wrongful arrest of Porcha Woodruff, resulting in significant personal harm including emotional distress, physical health complications during pregnancy, and legal repercussions. This meets the definition of an AI Incident because the AI system's use directly led to harm to a person. The lawsuit and public concern further confirm the incident's significance.

Woman sues Detroit after facial recognition mistakes her for crime suspect - The Boston Globe

2023-08-07
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition software (an AI system) that mistakenly identified the plaintiff, leading to her false arrest and detention. This caused direct harm to her health (stress, panic attack, dehydration), violation of her Fourth Amendment rights, and emotional distress. The AI system's malfunction (erroneous match) was a pivotal factor in these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to a person.

Faulty facial recognition lands pregnant woman in jail for carjacking | Boing Boing

2023-08-07
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) whose malfunction (faulty match) directly caused harm to a person (wrongful arrest, physical and emotional distress). This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The article details realized harm, not just potential risk, and thus it is not merely a hazard or complementary information.

Here's What It's Like To Be Falsely Arrested via Facial Recognition

2023-08-07
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition software) used by police that directly led to the wrongful arrest of an innocent person, causing harm to her health and violating her rights. This fits the definition of an AI Incident because the AI system's use directly caused harm (wrongful arrest, health issues during detention) and a violation of rights. The article also references multiple similar cases, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

Pregnant woman's arrest in carjacking case spurs call to end Detroit police facial recognition

2023-08-07
National Post
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its malfunction or misuse directly led to the wrongful arrest, causing emotional distress and potential physical harm to the woman. Because the AI system's output was pivotal in producing this realized harm, the event qualifies as an AI Incident rather than a hazard or complementary information.

Lawsuit filed after facial recognition tech causes wrongful arrest of pregnant woman

2023-08-07
Detroit Free Press
Why's our monitor labelling this an incident or hazard?
The facial recognition technology is an AI system used by police to identify suspects. Its flawed output directly caused the wrongful arrest and detention of a pregnant woman, resulting in physical and emotional harm. The event clearly involves harm to a person due to the AI system's malfunction and misuse, including violations of civil rights. This meets the definition of an AI Incident as the AI system's use directly led to harm.

Detroit alters facial recognition tech policy after lawsuit filed by pregnant woman

2023-08-09
Detroit Free Press
Why's our monitor labelling this an incident or hazard?
The facial recognition technology is an AI system used to generate investigative leads. Its use led to a false match that contributed to the wrongful arrest of Porcha Woodruff, causing harm to her rights and well-being. Although the police chief states that the technology itself did not violate policy and that the main fault was poor police work, the AI system's output was pivotal in the chain of events leading to harm. Therefore, this event meets the criteria for an AI Incident due to indirect harm caused by the AI system's use and subsequent human error.

Police Make Wrongful Arrest Based on Bad Facial Recognition... Again

2023-08-07
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—used by police for identifying suspects. The wrongful arrests and detentions directly caused harm to individuals' health, liberty, and rights, fulfilling the criteria for an AI Incident. The technology's malfunction (high misidentification rate) and its use have directly led to these harms. Therefore, this is an AI Incident rather than a hazard or complementary information.

Woman Sues Detroit for Arresting Her Based on Wrong Facial Recognition Match

2023-08-07
Jezebel
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition technology, an AI system, which produced an inaccurate match leading to the wrongful arrest of Porcha Woodruff. The wrongful arrest caused direct harm including physical injury due to stress-induced contractions during pregnancy, as well as humiliation and violation of rights. The AI system's malfunction is a direct contributing factor to these harms, fulfilling the criteria for an AI Incident.

Deadass? AI Wrongfully Accused A Pregnant Black Woman of Carjacking

2023-08-08
The Root
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition technology) whose erroneous output directly caused harm to a person, including wrongful arrest and physical and psychological injury. The racial bias embedded in the AI system is a key factor in the harm. This fits the definition of an AI Incident because the AI system's malfunction and use led directly to violations of rights and harm to health.

Pregnant woman's arrest for carjacking spurs lawsuit, call to end facial recognition

2023-08-08
mlive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, which was used to identify the woman incorrectly, resulting in her false arrest. This constitutes a violation of rights and harm to the individual, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the wrongful arrest and legal consequences occurred.

She Thought Her Arrest Was a 'Prank.' It Wasn't

2023-08-07
Newser
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—whose erroneous output directly caused harm to a person, including false arrest and imprisonment. This fits the definition of an AI Incident because the AI system's malfunction led to violations of rights and harm to the individual. The harm is realized and significant, including legal, physical, and emotional consequences. Therefore, this is classified as an AI Incident.

Woman files lawsuit, claims 'faulty' DPD facial recognition hit prompted her false arrest

2023-08-06
The Detroit News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) by law enforcement, which malfunctioned by producing an unreliable match. This malfunction directly led to the false arrest of the plaintiff, causing harm to her health, emotional well-being, and legal rights. The lawsuit also highlights systemic issues with the technology's higher misidentification rates for Black citizens, indicating a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's malfunction and use directly caused harm and legal violations.

Detroit Police Department seeks changes after suit over facial recognition-related arrest

2023-08-10
The Detroit News
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used to generate investigative leads. Its use directly contributed to the wrongful arrest, which constitutes harm to the individual's rights and personal liberty, fitting the definition of an AI Incident. The event describes realized harm caused indirectly by the AI system's output combined with poor investigative practices. The police department's response and policy changes are complementary information but do not negate the incident classification. Therefore, this is an AI Incident due to the realized harm linked to the AI system's use.

Black Mom Wrongfully Arrested While 8 Months Pregnant Due To Faulty Facial Recognition | Essence

2023-08-08
Essence
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in law enforcement. The wrongful arrest caused harm to the individual, including physical and emotional distress, especially given her pregnancy. The AI system's faulty match was a direct factor leading to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse or malfunction.

'We have to do better': Detroit police chief says wrongful arrest of pregnant woman was improper

2023-08-09
WDIV
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here to identify suspects. The wrongful arrest of the pregnant woman based on this technology constitutes harm to her rights and well-being, fulfilling the criteria for an AI Incident. The police chief acknowledges the misuse and has implemented policy changes, but the harm has already occurred due to the AI system's role in the wrongful arrest.

Facial recog error leads to lawsuit for Detroit Police

2023-08-08
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) by law enforcement, which directly led to the wrongful arrest and detention of Porcha Woodruff, causing physical harm (health risks during pregnancy), emotional distress, and violations of constitutional and civil rights. The AI system's erroneous output was a pivotal factor in the harm experienced. The article also references similar prior incidents, reinforcing the systemic nature of the harm. This fits the definition of an AI Incident as the AI system's use directly caused harm to a person and violated rights.

Detroit woman sues city after being falsely arrested while 8 months pregnant due to facial recognition technology

2023-08-07
NBC Chicago
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition technology, an AI system, which was used in the identification process leading to the false arrest of Porcha Woodruff. The harm is direct, as the AI system's erroneous output caused a wrongful arrest, impacting her personal rights and well-being. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person.

Facial recognition technology misidentified Black woman as suspect in robbery and carjacking -- while 8 months pregnant: Lawsuit

2023-08-08
Law & Crime
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its malfunction (misidentification) directly led to wrongful arrest, physical and emotional harm, and legal consequences for Porcha Woodruff. The event involves harm to a person (health and emotional distress) and alleged violation of rights (racial discrimination and false arrest). These harms are directly linked to the AI system's use and errors, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

US mom blames facial recognition tech for flawed arrest

2023-08-07
The South African
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) whose use by law enforcement directly led to a false arrest, constituting harm through violation of rights and personal injury (emotional and physical distress). The AI system's flawed identification was a pivotal factor in the incident. Therefore, this qualifies as an AI Incident under the framework, as the harm has materialized and is directly linked to the AI system's use.

Facial recognition error led to pregnant Black woman's arrest in Detroit

2023-08-07
TheGrio
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used by the police to identify suspects. Its erroneous match led directly to the wrongful arrest and detention of a pregnant woman, causing physical harm (stress, dehydration, contractions) and emotional trauma, as well as violations of her rights. The harm is realized and directly linked to the AI system's malfunction. This fits the definition of an AI Incident due to injury to health and violation of rights caused by the AI system's use.

US mom blames face recognition tech for flawed arrest - Jamaica Observer

2023-08-07
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police to identify a suspect. The technology's erroneous output directly led to the false arrest of Porcha Woodruff, causing harm to her personal liberty and well-being. This fits the definition of an AI Incident because the AI system's use directly caused harm (a false arrest) and violated rights (potentially human rights related to due process and discrimination).

Detroit police in deep water after using erroneous facial recognition to arrest pregnant black woman - SiliconANGLE

2023-08-08
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The facial recognition technology, an AI system, was used in the police investigation and directly led to the wrongful arrest and detention of Porcha Woodruff, causing significant harm to her health and rights. The harm is realized and directly linked to the AI system's malfunction or erroneous output. Therefore, this event qualifies as an AI Incident under the framework, as it involves harm to a person due to the use of an AI system.

Detroit police in deep water after arresting pregnant black woman due to facial recognition error - SiliconANGLE

2023-08-08
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by police that directly led to the false arrest of a person, causing physical harm (stress, dehydration requiring hospital treatment) and violation of rights. The harm is realized and directly linked to the AI system's malfunction (false positive match). Therefore, this qualifies as an AI Incident under the framework.

Pregnant Woman's False Arrest in Detroit Shows "Racism Gets Embedded" in Facial Recognition Technology

2023-08-07
Democracy Now!
Why's our monitor labelling this an incident or hazard?
Facial recognition software is an AI system used by police in this case. The wrongful arrest and detention of Porcha Woodruff, based on a false match by the AI system, directly caused harm to her health and violated her rights. The article also notes systemic racial bias embedded in the technology, which has led to multiple similar harms. This meets the criteria for an AI Incident because the AI system's malfunction and use directly caused harm to a person and violated rights.

Meet Porcha Woodruff, Detroit Woman Jailed While 8 Months Pregnant After False AI Facial Recognition

2023-08-09
Democracy Now!
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition technology) whose malfunction (false match) directly led to harm (wrongful arrest, stress-induced health risks) and a violation of rights (false imprisonment, malicious prosecution). The harm is realized and significant, meeting the criteria for an AI Incident. The systemic bias aspect further underscores the violation of fundamental rights. Therefore, this event is classified as an AI Incident.

Detroit Facial Recognition Software Results in Wrongful Arrest of Pregnant Woman

2023-08-07
Truthout
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used by the Detroit Police Department. Its malfunction and biased outputs have directly caused wrongful arrests, which constitute harm to individuals' rights and physical well-being. The wrongful arrest of Porcha Woodruff, including her being held during contractions, is a clear injury and violation of rights. The systemic racial bias and repeated misidentifications further demonstrate ongoing harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use and malfunction.

'I Was Scared': Detroit Woman Wrongly Arrested While 8 Months Pregnant for Carjacking, Robbery Due to Botched Facial Recognition Technology

2023-08-08
Atlanta Black Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves facial recognition technology, an AI system, used by police to identify suspects. The technology malfunctioned by producing an unreliable match, which directly led to the wrongful arrest and detention of Porcha Woodruff, causing physical and emotional harm. This wrongful arrest constitutes a violation of fundamental rights and harm to the individual. The article also notes similar prior incidents, reinforcing the systemic nature of the harm. Hence, this is an AI Incident as the AI system's malfunction directly caused harm to a person and violated her rights.

Black woman matched by facial recognition alleges police misconduct in lawsuit | Biometric Update

2023-08-07
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition AI systems by police, which directly led to the wrongful arrest and detention of a person, causing harm to her health and rights. The false match by the AI system was a pivotal factor in the incident. This fits the definition of an AI Incident because the AI system's malfunction directly caused harm to an individual, including violation of rights and emotional and physical harm.

Detroit Woman Sues City After False Arrest Due To Inaccurate Facial Recognition Software

2023-08-08
Reclaim The Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition software—used by law enforcement to identify suspects. The system's inaccurate output directly led to a false arrest, which is a clear harm to an individual (physical and psychological harm). The misuse or malfunction of the AI system in this case caused a violation of rights and harm to the person involved. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Black mom sues city of Detroit claiming she was falsely arrested while 8 months pregnant by officers using facial recognition technology

2023-08-08
Madison365
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial recognition technology, an AI system, which was explicitly used by police to identify the suspect. The AI system's erroneous match directly led to the wrongful arrest of Porcha Woodruff, causing physical harm (stress-induced contractions, health issues), psychological harm (trauma, anxiety), and violation of rights (false arrest, racial discrimination). The lawsuit and the description confirm the AI system's role in causing these harms. Hence, this is an AI Incident as the AI system's malfunction and misuse directly caused significant harm.

The face of injustice: arrested for false facial recognition

2023-08-09
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition technology used by police. Its malfunction (misidentification) directly caused harm to Porcha Woodruff, including wrongful arrest, emotional and physical distress, and violation of her rights. The harm is realized and documented, meeting the criteria for an AI Incident. The racial bias and repeated similar cases further emphasize systemic harm linked to the AI system's use.

Carjacking case arrest spurs call for police to end facial recognition

2023-08-07
KOAA
Why's our monitor labelling this an incident or hazard?
The facial recognition technology is an AI system used by the police to identify suspects. Its malfunction or misapplication directly led to the wrongful arrest of Porcha Woodruff, causing emotional distress and potential health harm during pregnancy. The lawsuit and multiple similar cases highlight systemic issues with the technology's accuracy and bias, fulfilling the criteria for an AI Incident involving harm to individuals and violations of rights. The event is not merely a potential risk or complementary information but a realized harm caused by AI use.

Detroit Sued: Pregnant Woman Wrongfully Arrested with Facial Recognition

2023-08-08
La Voce di New York
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its malfunction or misuse led to the wrongful arrest of Porcha Woodruff, causing harm to her health and rights, especially given her pregnancy. The lawsuit highlights the direct link between the AI system's unreliable match and the wrongful arrest, fulfilling the criteria for an AI Incident due to harm to a person and violation of rights.

Error-prone facial recognition leads to another wrongful arrest

2023-08-07
AI News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used by law enforcement for identification. The wrongful arrest of Porcha Woodruff due to inaccurate facial recognition is a direct harm to her rights and wellbeing, fulfilling the criteria for an AI Incident. The article explicitly links the harm to the AI system's flawed outputs and its use by the police department. The racial bias and repeated wrongful arrests further underscore the systemic nature of the harm caused by this AI system's malfunction and misuse.

Carjacking case arrest spurs call for police to end facial recognition

2023-08-08
East Oregonian
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for law enforcement identification. Its malfunction or misapplication caused a false arrest, leading to emotional distress and potential health harm to the woman, as well as broader concerns about racial bias and wrongful arrests. The harm is realized and directly linked to the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information.

China drafts rules for using facial recognition data

2023-08-08
SpaceWar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology in identifying a suspect, which led to the wrongful arrest of Porcha Woodruff. The technology's flaws, particularly its higher error rates for people of color, contributed to this harm. The wrongful arrest and subsequent legal action constitute injury and violation of rights, directly linked to the AI system's malfunction or misuse. Hence, this is an AI Incident under the framework.

Faulty facial recognition lands pregnant woman in jail for carjacking -...

2023-08-08
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition) used by law enforcement that falsely identified an innocent person as a suspect, resulting in her wrongful arrest and detention. The harm includes physical injury (dehydration, contractions), emotional distress, and violation of rights. The AI system's malfunction was a direct cause of these harms, meeting the criteria for an AI Incident under the framework.

US mom blames face recognition tech for flawed arrest

2023-08-08
HT Tech
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) whose use by law enforcement led to a false arrest, a clear harm to the individual's rights and well-being. The technology's known flaws, especially in identifying people of color, are central to the incident. The harm has already occurred (false arrest and legal charges), making this an AI Incident rather than a hazard or complementary information.

Woman arrested over faulty facial recognition match hits back, sues police

2023-08-07
HT Tech
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, which qualifies as an AI system under the definitions provided. The wrongful arrest and false accusation caused direct harm to the individual, including physical and emotional harm, and a violation of her rights. The malfunction or inaccuracy of the AI system was a direct contributing factor to the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to a person and violation of rights.

Tech failure: When false facial recognition match lands you in jail

2023-08-08
DT next
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—automated facial recognition technology—used by police to identify suspects. The technology's erroneous matches directly led to wrongful arrests, causing harm to individuals' health (stress, physical pain), violations of their rights (wrongful arrest, seizure of property), and legal consequences. This fits the definition of an AI Incident because the AI system's malfunction and use directly caused harm and rights violations. The article also mentions multiple lawsuits and systemic issues, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

Carjacking case arrest spurs call for police to end facial recognition

2023-08-07
Scripps News
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its misidentification directly led to the wrongful arrest of Porcha Woodruff, causing emotional distress and potential health harm during pregnancy. The article details realized harm, not just potential harm, and the AI system's role was pivotal in the incident. Hence, this is classified as an AI Incident.

Pregnant woman's arrest in carjacking case spurs call to end...

2023-08-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for suspect identification. Its malfunction or misuse directly led to the false arrest of a pregnant woman, causing emotional distress and putting her health at risk. This realized harm, together with the resulting lawsuit and calls to end the technology's use, fits the definition of an AI Incident, so the classification is appropriate.

Pregnant Woman Falsely Accused of Carjacking By Facial Recognition Glitch

2023-08-08
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition technology—used by law enforcement to identify suspects. The technology malfunctioned by misidentifying Woodruff, leading directly to her wrongful arrest and detention, which caused physical and emotional harm, including health risks during pregnancy. The harm includes violation of rights and wrongful criminal charges, fulfilling the criteria for an AI Incident. The article details realized harm, not just potential harm, and the AI system's role is pivotal in causing this harm. Thus, the classification as AI Incident is appropriate.

Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
Jamaica Gleaner
Why's our monitor labelling this an incident or hazard?
Facial-recognition technology is an AI system used here for suspect identification. Its use directly led to the wrongful charging of a person, which constitutes harm to the individual's rights and potentially their wellbeing. The incident involves misuse or overreliance on AI outputs leading to a violation of rights and wrongful legal action. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in law enforcement.

Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) whose outputs directly led to harm: wrongful arrest and charging of an innocent person, causing personal and legal harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (wrongful arrest, legal jeopardy, and emotional distress). The police chief's response and policy changes are reactions to this incident but do not change the classification of the event itself.

AI facial recognition led to 8-month pregnant woman's wrongful carjacking arrest in front of kids: lawsuit

2023-08-14
Fox News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as facial recognition technology used by police. The AI system's malfunction (misidentification) directly caused the wrongful arrest and associated harms, including physical and emotional harm to the woman and her children, as well as violations of her rights. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and violations of rights.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-10
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial-recognition technology) whose use directly led to harm: wrongful arrest and charging of an innocent person, which constitutes a violation of rights and harm to the individual. The incident is not hypothetical or potential but has already occurred, fulfilling the criteria for an AI Incident. The police response and policy changes are complementary information but do not negate the incident classification.
Detroit changing facial-recognition policy after police allegedly charge wrong woman

2023-08-10
TheGrio
Why's our monitor labelling this an incident or hazard?
Facial-recognition technology is an AI system used here for suspect identification. Its use directly led to wrongful charging and arrest, constituting harm to the individual's rights and wellbeing, which fits the definition of an AI Incident. The article describes realized harm, not just potential harm, and the police response is a reaction to this incident, not the main focus. Therefore, this is classified as an AI Incident.
AI facial recognition led to 8-month pregnant woman's wrongful carjacking arrest in front of kids: lawsuit

2023-08-14
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition) that was used by police to identify a suspect. The AI system's failure to correctly identify the woman led directly to her wrongful arrest, causing physical and emotional harm, including medical complications during pregnancy and distress to her children. The lawsuit and referenced studies confirm the AI's racial bias and unreliability, which are central to the harm caused. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm to a person and violations of rights.
Detroit police changing facial-recognition policy after pregnant...

2023-08-10
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial-recognition technology is an AI system used here to identify suspects. Its use directly led to the wrongful arrest of Porcha Woodruff, causing harm to her rights and wellbeing. The charges were dropped, indicating the harm was recognized. The event involves the use and malfunction (misidentification) of the AI system leading to harm, fitting the definition of an AI Incident. The police response and policy changes are complementary but the core event is the wrongful arrest due to AI misuse.
Detroit woman at center of facial recognition lawsuit responds to police chief's claims

2023-08-10
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) whose use directly led to a wrongful arrest, constituting harm to the individual involved. The misidentification and subsequent arrest are clear harms linked to the AI system's malfunction or misuse. Therefore, this qualifies as an AI Incident. The policy changes and apologies are complementary but do not change the primary classification.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
St. Louis Post-Dispatch
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) whose outputs directly led to harm: wrongful arrest and charging, which is a violation of the individual's rights and caused personal harm. The harm has materialized, and the police are responding with policy changes. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in law enforcement leading to a rights violation and wrongful legal action.
Clear dangers of photo tech used by police

2023-08-11
The Orange County Register
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—automated facial recognition technology—used by police to identify suspects. The system produced a false positive match, leading to the wrongful arrest and detention of Porcha Woodruff, causing harm to her health and violating her rights. This is a clear example of harm caused by the use and malfunction of an AI system, meeting the criteria for an AI Incident due to injury to a person and violation of rights. The article also highlights systemic issues with the technology's deployment and its risks, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Pregnant Woman's Arrest Spurs a Change

2023-08-10
Newser
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (facial-recognition technology) being used in law enforcement that directly led to harm (wrongful arrest and detention) of a person. This fits the definition of an AI Incident because the AI system's use caused a violation of rights and harm to an individual. The article also discusses policy changes as a response, but the primary focus is on the incident of harm caused by the AI system's malfunction or misuse.
Thompson: Detroit's facial recognition technology must go

2023-08-14
The Detroit News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—facial recognition technology—by Detroit police, which directly caused harm by wrongly identifying Porcha Woodruff, leading to her wrongful arrest while pregnant. This constitutes a violation of human rights and constitutional rights, fulfilling the criteria for an AI Incident. The article details realized harm, not just potential harm, and discusses the consequences and calls for policy changes, confirming the classification as an AI Incident rather than a hazard or complementary information.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-10
Financial Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial-recognition technology—whose use directly led to a wrongful arrest, a clear harm to the individual's rights and well-being. This fits the definition of an AI Incident because the AI system's use directly caused harm (wrongful charging/arrest) and violated rights. The policy changes are a response to this incident but do not negate the fact that harm occurred. Therefore, this is classified as an AI Incident.
Pregnant Black Woman The Latest Victim Of Detroit PD Facial Recognition False Positive

2023-08-10
Techdirt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) whose erroneous output directly led to the wrongful arrest and detention of a person, causing harm to her health, emotional well-being, and rights. The Detroit PD's reliance on the AI system's flawed identification without adequate verification constitutes misuse of the AI system. The harm is concrete and has already occurred, including legal charges and physical detention of a pregnant woman. This fits the definition of an AI Incident, as the AI system's malfunction and use directly caused harm to a person and violated her rights.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
Omaha.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) in law enforcement. The technology's output directly contributed to the wrongful arrest and charging of a pregnant woman, causing harm to her and violating her rights. The harm is realized, not just potential, and the police chief's response confirms the incident's significance. Therefore, this qualifies as an AI Incident due to direct harm and rights violation caused by the AI system's use.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
nwi.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) whose outputs directly led to the wrongful arrest and charging of a pregnant woman, causing direct harm to her. The harm is realized, not merely potential. Therefore, this qualifies as an AI Incident. The article also discusses policy responses, but the primary focus is the wrongful arrest caused by the AI system's use.
Detroit police change facial recognition procedure as falsely accused woman sues | Biometric Update

2023-08-10
Biometric Update
Why's our monitor labelling this an incident or hazard?
The facial recognition software is an AI system used by the Detroit police since 2017. Its use in this case directly contributed to the wrongful arrest and detention of Porcha Woodruff, which constitutes harm to a person. The incident involves the AI system's use and the procedural misuse of its outputs (a photo lineup that included the suspect's image), leading to a violation of rights and harm. Therefore, this qualifies as an AI Incident. The procedural changes are a response to the incident, but the main event is the wrongful arrest caused by the AI system's involvement.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
Winston-Salem Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) whose outputs directly led to the wrongful arrest and charging of a pregnant woman, causing direct harm to her. This fits the definition of an AI Incident because the AI system's use directly led to harm and violations of rights. The subsequent policy changes are responses to this incident but do not change the classification of the event itself.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
NewsAdvance.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial-recognition technology) whose outputs were used in a criminal case, leading to wrongful charges and the detention of a pregnant woman. This constitutes harm to the person's health and well-being, and a violation of her rights due to the misidentification. The police chief's policy changes are a response to this incident but do not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged

2023-08-11
Sioux City Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of facial-recognition technology, which is an AI system, in law enforcement. The technology's output directly led to the wrongful arrest and charging of a pregnant woman, causing harm to her health and rights. The charges were dropped, indicating the harm was recognized. The police chief's response to change policies confirms the AI system's role in the incident. This meets the criteria for an AI Incident because the AI system's use directly led to harm to a person and a violation of rights.
Detroit police changing facial-recognition policy after pregnant woman says she was wrongly charged - KION546

2023-08-10
KION546
Why's our monitor labelling this an incident or hazard?
Facial-recognition technology is an AI system used here in law enforcement. Its use directly led to the wrongful charging and arrest of a person, which constitutes harm to the individual's rights and well-being. This meets the criteria for an AI Incident because the AI system's use directly contributed to a violation of rights and harm to a person. The policy change is a response to this incident but is not the main focus of the article, so the event is classified as an AI Incident rather than Complementary Information.
Detroit Police Changing Facial-Recognition Policy After Pregnant ... - Slashdot

2023-08-11
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The facial-recognition technology, an AI system, was used in the investigation and directly contributed to the wrongful identification and charging of an innocent person, constituting harm through violation of rights and legal harm. The event describes realized harm caused by the AI system's use, qualifying it as an AI Incident. The subsequent policy changes are responses but do not negate the incident classification.
A Pregnant Woman Was Arrested by Mistake After Being Identified by Facial Recognition Technology

2023-08-08
The New York Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) by the police to identify suspects. The system's output directly led to the wrongful arrest of an innocent person, causing harm to her health (stress, dehydration, panic attacks), emotional well-being, and legal rights. The harm is realized and directly linked to the AI system's malfunction or misidentification. Therefore, this qualifies as an AI Incident under the framework, as it involves harm to a person caused by the use of an AI system.
What Happens When AI Gets It Wrong?

2023-08-08
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems causing direct harm: a politician falsely labeled a terrorist by an AI chatbot, and a woman wrongfully arrested due to AI facial recognition errors. These incidents have led to reputational damage, psychological harm, and legal jeopardy, fulfilling the criteria for AI Incidents. Additionally, the mention of malicious uses of AI (deepfakes, extortion) further supports the classification as AI Incident. The harms are realized, not just potential, and the AI systems' malfunction or misuse is a contributing factor.
The Face of Injustice: Detained Over a False Facial Recognition Match

2023-08-09
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—facial recognition technology—used by the police to identify suspects. The system malfunctioned by falsely matching Woodruff's image, leading to her wrongful arrest and detention, causing direct harm to her health and emotional well-being, as well as a violation of her rights. This fits the definition of an AI Incident because the AI system's malfunction directly caused harm to a person and a violation of rights.
Innocent Pregnant Woman Jailed in the United States Over an Increasingly Common Facial Recognition Error

2023-08-08
3D Juegos
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) by police that directly led to harm: wrongful detention and legal accusations against an innocent person. This constitutes a violation of rights and harm to the individual, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction or inaccuracy is a contributing factor to the incident.
Pregnant Woman Arrested Due to Facial Recognition Software Error; She Will Sue the City

2023-08-07
Expansión
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system—facial recognition technology—for law enforcement purposes. The wrongful arrest and detention of Porcha Woodruff, including physical harm (dehydration, contractions) and psychological harm (a panic attack), directly resulted from the AI system's erroneous identification. The systemic bias against Black individuals and the low accuracy rate (4%) further support the classification as an AI Incident. The harm is realized and significant, meeting the criteria for injury to a person and violation of rights due to the AI system's malfunction and use.
Woman Arrested in the United States Over a Facial Recognition Error

2023-08-10
Antena3
Why's our monitor labelling this an incident or hazard?
The police used an AI-based facial recognition system to identify the suspect, which directly led to the wrongful arrest and detention of Porcha Woodruff. This caused harm to her health (physical pain, dehydration, panic attack), violation of her rights (wrongful arrest, detention without cause), and emotional distress. The AI system's malfunction or error in identification was a pivotal factor in this incident. Therefore, this qualifies as an AI Incident under the definitions provided.
AI Errors Can Be Devastating

2023-08-09
El Diario de Juárez
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (BlenderBot 3, facial recognition tools) whose erroneous outputs have caused harm to real people, including wrongful accusations and arrests, reputational damage, and psychological stress. These harms fall under violations of rights and harm to individuals. The AI systems' malfunction or misuse is a direct contributing factor to these harms. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use and errors have directly led to significant harms.
Woman Sues Detroit Police for Arresting Her While 8 Months Pregnant Over a Facial Recognition Software Error

2023-08-07
Telemundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology by the police to identify the suspect, which was unreliable and led to the wrongful arrest of a pregnant woman. The harm includes physical health issues (low fetal heart rate, contractions due to stress) and legal harm (wrongful detention and charges). The AI system's malfunction or misuse is a direct contributing factor to these harms, fulfilling the criteria for an AI Incident under the OECD framework.
US: Pregnant Woman Arrested Over a Facial Recognition Error

2023-08-09
TiempoSur
Why's our monitor labelling this an incident or hazard?
The article describes a wrongful arrest based on an unreliable facial recognition match, an AI system error that directly led to harm (unlawful arrest and detention) and violation of rights. The involvement of AI in causing this harm is explicit and central to the event. The harm is realized, not just potential, and includes legal and personal consequences for the individual. Hence, it meets the criteria for an AI Incident.
After Being Wrongfully Detained While Pregnant, Woman Sues the City of Detroit

2023-08-07
Telemundo Washington DC (44)
Why's our monitor labelling this an incident or hazard?
The article describes how the police used facial recognition technology to identify the woman as a suspect, which was inaccurate and led to her wrongful arrest. The harm includes physical health effects due to stress and a violation of her rights through false detention. The AI system's malfunction (unreliable facial recognition) directly contributed to these harms, meeting the criteria for an AI Incident.
Detroit Woman Sues the City After Being Wrongfully Jailed

2023-08-07
La Neta Neta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition technology, an AI system, which led to a false identification and wrongful arrest. The harm includes physical health issues (low heart rate, contractions) and emotional distress, directly linked to the AI system's unreliable output. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly caused harm to an individual.
Black Woman Says She Was Wrongfully Arrested Because of Facial Recognition - Notiulti

2023-08-07
Notiulti
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) in law enforcement that directly caused harm to a person through wrongful arrest and detention. The harm includes physical and psychological distress, as well as a violation of legal rights. The AI system's erroneous output was a pivotal factor in the incident. Therefore, this qualifies as an AI Incident under the framework, as it directly led to harm and rights violations.
Black Mother Sues the City of Detroit Alleging She Was Falsely Arrested While 8 Months Pregnant by Officers Using Facial Recognition Technology - Notiulti

2023-08-08
Notiulti
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of facial recognition AI technology by law enforcement, which directly led to the wrongful arrest of Porcha Woodruff. This arrest caused physical harm (stress-induced contractions, hospitalization), psychological harm (anxiety, trauma), and harm to her family (children's trauma). The lawsuit highlights the unreliability and racial bias of the AI system, which is a direct factor in the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused significant harm to a person and implicates violations of rights and harm to health.
Detroit Police Chief Says "Poor Investigative Work" Led to the Arrest of a Black Mother Who Says Facial Recognition Technology Played a Role - Notiulti

2023-08-10
Notiulti
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) used in law enforcement that directly contributed to a wrongful arrest, causing harm to the individual and alleged racial discrimination. The harm is realized, not just potential, and the AI system's role is pivotal in the incident. This fits the definition of an AI Incident because the AI system's use led to violations of rights and harm to a person. The police chief's statement about poor investigative work does not negate the AI system's involvement in causing harm.
Artificial Intelligence Got 6 Innocent People Detained

2023-08-09
NTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition technology by police, which led to wrongful arrests. This constitutes harm to individuals, including a violation of their rights, as innocent people were detained based on faulty AI outputs. The AI system's malfunction (misidentification) directly caused these harms, qualifying this as an AI Incident rather than a hazard or complementary information.
Wrongfully Detained by Artificial Intelligence

2023-08-10
Sabah
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (facial recognition software) whose malfunction (incorrect identification) directly led to harm: wrongful arrests and violation of rights. The harm includes physical and psychological distress, especially for a pregnant woman, and systemic issues such as racial bias (all six wrongfully arrested were Black). This fits the definition of an AI Incident because the AI system's use and malfunction directly caused harm to individuals and violated their rights.
Artificial Intelligence Got 6 Innocent People Detained

2023-08-08
TRT haber
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as a facial recognition tool used by police. The AI system's malfunction (misidentification) directly led to the wrongful arrest and detention of innocent people, constituting a violation of human rights and harm to individuals. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused harm (violation of rights and physical/psychological harm during detention).
Woman in the US Detained Due to an Artificial Intelligence Error

2023-08-09
Webtekno
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (facial recognition technology) used by law enforcement. The AI system's malfunction (incorrect facial match) directly led to the wrongful arrest and detention of Porcha Woodruff, causing harm to her health and violating her rights. The article also notes this is the sixth known case of wrongful arrest due to AI facial recognition errors, highlighting systemic harm. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's malfunction directly caused harm to a person and violated rights.
Artificial Intelligence Got 6 Innocent People Detained

2023-08-09
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition software) whose malfunction (misidentification) directly led to harm (wrongful detention, physical distress during pregnancy, violation of rights). This fits the definition of an AI Incident because the AI system's use caused violations of human rights and harm to persons.
Even Their AI Is Racist: It Got 6 Innocent Black People Detained

2023-08-10
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI facial recognition systems by police that wrongly identified six Black individuals, leading to their unlawful detention. This is a direct harm caused by the AI system's malfunction and biased performance, resulting in violations of fundamental rights and harm to the individuals involved. Therefore, this qualifies as an AI Incident under the framework, as the AI system's malfunction directly led to harm and rights violations.
"Sorry" from the Police: Artificial Intelligence Identified a Pregnant Black Woman as a Robbery Suspect | in.gr

2023-08-11
in.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition) whose malfunction (biased misidentification) directly led to harm: wrongful arrest and detention of a person, which constitutes a violation of rights and harm to the individual. The involvement of AI is explicit, and the harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused harm to a person and a breach of rights.
Artificial Intelligence Blunder: It Identified an 8-Months-Pregnant Woman as a Robbery Suspect

2023-08-11
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in law enforcement that malfunctioned by misidentifying a suspect, resulting in the wrongful arrest of a pregnant woman. This constitutes direct harm to the individual's rights and well-being, fitting the definition of an AI Incident due to violation of rights and harm to a person caused by the AI system's malfunction and use.
Police "Apology": Artificial Intelligence Identified a Pregnant Black Woman as a Robbery Suspect

2023-08-11
Cretalive
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition algorithm) whose malfunction (biased and inaccurate identification) directly led to harm: wrongful arrest and detention of a person, violating her rights and causing personal harm. This fits the definition of an AI Incident because the AI system's use directly caused harm to an individual, including a violation of fundamental rights. The police response and policy changes are complementary but do not change the classification of the original event as an AI Incident.
Detroit Police "Apology": Artificial Intelligence Identified a Pregnant Black Woman as a Robbery Suspect

2023-08-12
PoliceNET of Greece
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as used for facial recognition by law enforcement. The AI system's malfunction (misidentification) directly led to harm (wrongful arrest and detention) of a person, fulfilling the criteria for an AI Incident. The harm includes violation of personal rights and physical/psychological harm from wrongful detention. The police response and policy changes are complementary but do not negate the incident classification.