Concerns Over Police AI Facial Recognition System


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Brandenburg data protection officials, including Dagmar Hartge, have raised concerns about a police AI facial recognition system that scans the faces of thousands of members of the public and compares them against a police database. Officials consider the system disproportionate and a risk to privacy and fundamental rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly described as a facial recognition program used by police for suspect identification. Its use has directly led to privacy harms and potential violations of fundamental rights, as it processes images of thousands of uninvolved individuals without proportional justification. The data protection authority's criticism and the fines it imposed indicate recognized harm, so the event qualifies as an AI Incident rather than a mere potential risk. The article also discusses broader data protection issues related to AI, reinforcing this assessment of realized harm.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Democracy & human autonomy, Fairness

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Public interest, Psychological

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Criticism of police facial recognition system

2025-05-12
GMX
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a facial recognition program used by police for suspect identification. Its use has directly led to privacy harms and potential violations of fundamental rights, as it processes images of thousands of uninvolved individuals without proportional justification. The data protection authority's criticism and the fines it imposed indicate recognized harm, so the event qualifies as an AI Incident rather than a mere potential risk. The article also discusses broader data protection issues related to AI, reinforcing this assessment of realized harm.

Criticism of police facial recognition system - WELT

2025-05-12
DIE WELT
Why's our monitor labelling this an incident or hazard?
The article mentions AI use in police facial recognition and other data processing contexts, with concerns about data misuse and privacy violations. However, it does not report a specific AI Incident where AI use directly or indirectly caused harm such as injury, rights violations, or property/community harm. Instead, it reports on complaints, regulatory actions, and planned oversight, which are responses to potential or ongoing issues. Therefore, the article fits best as Complementary Information, providing context and updates on societal and governance responses to AI-related privacy and data protection issues, rather than describing a new AI Incident or AI Hazard.

Data protection authority: criticism of police facial recognition system

2025-05-12
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The police's facial recognition system is an AI system that processes images in real time to identify individuals. Its use has directly led to privacy violations by scanning thousands of uninvolved people without proportional justification, a breach of fundamental rights and data protection laws. The article describes realized harm through data misuse and legal complaints, fulfilling the criteria for an AI Incident. The system's involvement in both the development and use phases, and the resulting harm to individuals' rights, support this classification.

Data protection officials criticize police facial recognition system

2025-05-12
rbb24.de
Why's our monitor labelling this an incident or hazard?
The police system uses AI-based facial recognition to capture faces in public spaces and compare them against a database. The criticism by data protection authorities highlights potential violations of privacy rights and of the principle of proportionality, which are human rights concerns. Since the system has been used in actual investigations and processes data of uninvolved people, this constitutes an AI Incident due to the indirect violation of rights and potential harm to individuals' privacy.