AI Facial Recognition in Sao Paulo Leads to Mistaken Arrests

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Sao Paulo's Smart Sampa AI facial-recognition system, which police use to identify fugitives through 40,000 cameras, has led to thousands of arrests. However, more than 8% of those detained were later released because of identification errors, amounting to wrongful arrests and violations of individual rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI facial-recognition system for law enforcement. The system's use has directly caused harm through mistaken arrests and wrongful detentions, which violate human rights and legal protections. These harms are realized and documented, not merely potential. The event therefore meets the criteria for an AI Incident, given the direct link between the AI system's use and harm to individuals' rights and freedoms.[AI generated]
AI principles
Fairness
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial-recognition system for law enforcement. The system's use has directly caused harm through mistaken arrests and wrongful detentions, which violate human rights and legal protections. These harms are realized and documented, not merely potential. The event therefore meets the criteria for an AI Incident, given the direct link between the AI system's use and harm to individuals' rights and freedoms.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
eNCAnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial-recognition system (Smart Sampa) that scans images from thousands of cameras to identify fugitives and criminals. While it has successfully apprehended many offenders, it has also caused mistaken arrests, implying harm to innocent individuals. This harm falls under violations of human rights and harm to persons. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a large-scale facial-recognition system used by police to identify and arrest individuals. The system's use has directly led to harm, including wrongful arrests and detentions of innocent people, which are violations of human rights and legal rights. The article provides concrete examples of such harms, including an elderly man mistaken for a rapist and psychiatric patients wrongfully detained. These harms are materialized and directly linked to the AI system's operation and errors, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Sao Paulo AI policing nabs innocent people

2026-03-18
Taipei Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial recognition system that has led to the wrongful arrest and detention of innocent people, a direct harm to individuals' rights and liberty. The system's errors led to at least 59 detainees being released after mistaken identifications, and other wrongful arrests resulted from outdated warrants or misidentifications. This meets the definition of an AI Incident: the system's use has directly led to violations of human rights and harm to individuals. The presence of the AI system is explicit, and the harm is realized, not just potential. Hence, the classification is AI Incident.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The AI system (facial recognition) is explicitly mentioned and is used in law enforcement to identify fugitives. The fact that more than 8 percent of those arrested had to be released due to errors shows that the AI system's outputs led to wrongful arrests, which constitute violations of human rights and legal protections. This harm is realized and directly linked to the AI system's use, qualifying the event as an AI Incident.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The facial-recognition system is an AI system used for law enforcement purposes. The mention of mistaken arrests shows that the AI system's use has caused harm to innocent individuals, which constitutes a violation of human rights. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in policing.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
The Grand Junction Daily Sentinel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial recognition system (Smart Sampa) that scans public and private cameras to identify fugitives and criminals. The system's errors have caused wrongful arrests and detentions, which are harms to individuals' rights and freedoms, fitting the definition of harm to human rights or breach of legal protections. The AI system's malfunction or misidentification is a direct contributing factor to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Sao Paulo AI policing nabs criminals, and a few innocents

2026-03-17
RTL Today
Why's our monitor labelling this an incident or hazard?
The Smart Sampa system is an AI facial recognition system actively used in policing, which has directly led to arrests and detentions. The article reports concrete cases of mistaken arrests and wrongful detentions caused by errors in the AI system's identification, which is a direct harm to individuals' rights and freedom. This meets the definition of an AI Incident as the AI system's use and malfunction have directly led to harm to persons and violations of rights. The concerns about algorithmic bias and misuse further support the classification as an incident rather than a mere hazard or complementary information.

Smart Sampa: Sao Paulo's revolutionary but controversial digital surveillance network

2026-03-17
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (facial recognition by AI) used in real-time surveillance and law enforcement. The system's use has directly led to harm: wrongful arrests and detentions, which are violations of human rights and legal protections. The harms are materialized and documented, including specific cases of mistaken identity and detentions based on outdated warrants. The system's errors and potential biases further exacerbate these harms. Thus, the event meets the criteria for an AI Incident because the AI system's use has directly caused harm to individuals and communities through violations of rights and wrongful imprisonment.

Sao Paulo's AI "Big Brother" that arrests criminals, and some innocents

2026-03-17
France 24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a facial recognition system using AI for real-time identification and law enforcement. The system's use has directly caused harm by leading to wrongful arrests and detentions, which are violations of human rights and legal rights. The article provides concrete examples of such harms, including an elderly man wrongfully detained and patients mistakenly arrested. These harms are materialized and directly linked to the AI system's malfunction or errors. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Sao Paulo's AI "Big Brother" that arrests criminals, and some innocents

2026-03-18
France 24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition with AI) whose use has directly led to harm: wrongful arrests and detentions of innocent people, which are violations of human rights and legal protections. The system's errors have caused real harm to individuals, fulfilling the criteria for an AI Incident. The article also discusses the system's role in reducing crime, but the wrongful arrests and errors are significant harms directly linked to the AI system's malfunction or misuse. Hence, the classification as AI Incident is appropriate.

Sao Paulo deploys AI surveillance, sparking debate over errors

2026-03-17
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The Smart Sampa system is an AI system using facial recognition technology combined with judicial databases to identify suspects in real time. Its use has directly caused harm through wrongful detentions (over 8% error rate, including mistaken identity and outdated warrants), which constitute violations of human rights and civil liberties. The system's deployment in public and private spaces and its impact on individuals' freedom and rights meet the criteria for an AI Incident. The article reports realized harm, not just potential risk, so it is not merely a hazard or complementary information.

Smart Sampa watches millions: the AI facial-recognition system capturing criminals, and innocents, in Brazil

2026-03-18
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI facial recognition system (Smart Sampa) that monitors millions of people and leads to arrests. It reports concrete harms: wrongful detentions of innocent people due to misidentification by the AI system and outdated judicial data, causing harm to individuals' liberty and rights. These harms are direct consequences of the AI system's use and malfunction. The involvement of AI is clear and central to the event, and the harms are realized, not just potential. Hence, this is an AI Incident.

Smart Sampa: the massive AI surveillance system dividing São Paulo

2026-03-18
El Financiero, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a facial-recognition system using AI for real-time identification and surveillance. The system's use has directly caused harm through wrongful arrests and detentions, which constitute violations of human rights and fundamental freedoms. The article provides concrete examples of these harms, including mistaken identity leading to detention and the use of outdated arrest warrants. These harms fall under the definition of an AI Incident, as the AI system's errors have directly harmed persons' rights and freedoms. The concerns about racial bias and civil control further support this classification. Therefore, the event is best classified as an AI Incident.