Turkey to Equip Police with AI-Powered Body Cameras Featuring Facial Recognition


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey's Interior Minister announced that, starting in 2025, all police officers will wear body cameras equipped with AI-based facial recognition. The system enables real-time monitoring and identification of individuals, raising concerns about potential privacy violations and misuse, though no harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI technology (face recognition) integrated into police body cameras, which will be used to monitor and identify suspects in real time. This involves AI system development and intended use. However, since the deployment is planned for the future and no actual harm or incident has occurred yet, the event represents a plausible risk of harm, such as privacy violations or misuse, but no direct or indirect harm has materialized. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability, Democracy & human autonomy, Robustness & digital security, Fairness

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Public interest

Severity
AI hazard

Business function:
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard


Interior Minister Ali Yerlikaya announces that police officers will be fitted with body cameras

2024-02-29
Son Dakika

A new era for the police! Officers will be fitted with body cameras

2024-02-28
İnternethaber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a facial recognition system integrated into police body cameras, which is an AI system. The deployment and use of this AI system in policing could lead to violations of human rights, particularly privacy rights, and other harms related to surveillance and potential misuse. However, the article does not report any realized harm or incident resulting from this deployment yet; it only announces the plan and intended use. Therefore, this event represents a plausible future risk of harm due to AI use in surveillance and policing, qualifying it as an AI Hazard rather than an AI Incident or Complementary Information.

Will police officers be fitted with body cameras?

2024-02-29
Vatan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of AI-based facial recognition technology into police body cameras, which will be used to monitor and identify individuals during police interactions. Although no harm has yet occurred, the deployment of such AI systems in policing could plausibly lead to violations of fundamental rights and privacy, constituting an AI Hazard. Since the event concerns future deployment and potential risks rather than an actual incident of harm, it is best classified as an AI Hazard.

Body cameras with a "facial recognition system" for police officers

2024-02-29
Bianet - Bagimsiz Iletisim Agi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the integration of a facial recognition system, which is an AI system, into police body cameras. The system will be used to identify individuals during police interactions and enable immediate intervention if a wanted person is detected. While the deployment is planned for the future (2025), the description implies a credible risk of potential harms such as violations of privacy and human rights due to pervasive surveillance and real-time identification without explicit mention of safeguards. Since no actual harm has yet occurred but the system's use could plausibly lead to violations of rights or other harms, this qualifies as an AI Hazard rather than an Incident. The article does not report any realized harm or incident at this stage, nor does it focus on responses or updates to prior incidents, so it is not Complementary Information.

Minister Yerlikaya announces: police officers will be fitted with body cameras featuring a "facial recognition system"

2024-02-29
Sputnik Türkiye
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used for identifying individuals from video footage. The deployment of such technology by police raises plausible risks of harm, including potential violations of privacy and human rights, misuse, or abuse of surveillance capabilities. Although no harm has yet occurred, the description indicates a credible risk that the system's use could lead to incidents involving rights violations or other harms. Therefore, this event qualifies as an AI Hazard due to the plausible future harm from the use of AI-powered facial recognition in policing.