Predictive Policing AI Geolitica Fails, Causes Harm and Bias in Plainfield

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

The predictive policing AI Geolitica, used by Plainfield police in New Jersey, was found to be accurate in less than 1% of cases, leading to ineffective policing and reinforcing biases against minority communities. Studies and police feedback highlight its failure and the resulting harm to targeted populations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as predictive policing software used by law enforcement. The system's use directly led to harm by disproportionately targeting minority communities, which constitutes a violation of rights and harm to communities. The software's failure to accurately predict crimes also resulted in ineffective policing, potentially exacerbating inequalities and undermining public safety. These outcomes meet the criteria for an AI Incident as the AI system's use has directly led to harm. The article also references prior investigations highlighting bias and misuse, reinforcing the classification as an incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Forecasting/prediction, Organisation/recommenders


Articles about this incident or hazard

Predictive policing: this software failed to predict crimes in 99% of cases

2023-10-04
01net
Crime prediction software is totally ineffective and reinforces prejudice, according to this study

2023-10-03
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (crime prediction software) whose deployment has directly led to harm: ineffective policing that wastes public funds and, more importantly, biased targeting of minority communities, which constitutes a violation of rights and harm to communities. The AI system's outputs caused or contributed to these harms, fulfilling the criteria for an AI Incident.
A study claims that predictive policing software fails to predict crimes, reigniting the debate over the effectiveness of crime prediction algorithms

2023-10-04
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the predictive policing software Geolitica) whose use has directly led to harms including ineffective policing, disproportionate targeting of minority communities, and potential wrongful arrests, which constitute violations of rights and harm to communities. The study's findings and expert commentary confirm that the AI system's malfunction and poor predictive performance caused these harms. Hence, this qualifies as an AI Incident under the OECD framework, as the AI system's use has, both directly and indirectly, led to significant harm.
Crime prediction software used by a US police department is wrong 99% of the time

2023-10-03
korii.
Why's our monitor labelling this an incident or hazard?
The AI system (Geolitica) was explicitly used by the police for crime prediction. Its use led to significant harm: ineffective policing due to inaccurate predictions and disproportionate targeting of minority and low-income neighborhoods, which is a violation of rights and harm to communities. These harms are direct consequences of the AI system's outputs and use. Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to communities and violations of rights.
Geolitica, the predictive policing tool, is wrong in 99% of cases

2023-10-04
L'ADN
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies Geolitica as an AI system used for predictive policing. It details the system's use and the direct harms caused by its inaccurate predictions, including social discrimination and harmful policing practices affecting communities. These harms fall under violations of rights and harm to communities, fulfilling the criteria for an AI Incident. The article also references regulatory responses recognizing the risks of such AI systems. Therefore, this event is best classified as an AI Incident.

2023-10-04
developpez.net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the predictive policing software Geolitica) whose use has directly led to ineffective policing and biased targeting of minority communities, which are harms to communities and violations of rights. The study documents realized harms such as disproportionate patrols and wrongful arrests, as well as the police department's own acknowledgment of the software's ineffectiveness and potential negative consequences. This meets the criteria for an AI Incident because the AI system's use has, both directly and indirectly, caused harm.