Turkey Plans AI-Based Biometric Tracking for Legal Supervision


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey's Justice Ministry is preparing to implement the Biometric Signature and Tracking System (BİOSİS), which will use AI-driven biometric verification and GPS tracking to monitor some 450,000 individuals under judicial supervision via their smartphones. While the system aims to increase efficiency, it raises concerns about potential privacy violations and rights infringements.[AI generated]

Why's our monitor labelling this an incident or hazard?

The system clearly involves AI technologies (biometric recognition, GPS tracking, automated alerts) used to monitor individuals. Because it is still in the procurement phase and no harm or violation has been reported, the event does not qualify as an AI Incident. However, deploying such pervasive AI surveillance could plausibly lead to harms such as privacy violations, misuse, or rights infringements, so it fits the definition of an AI Hazard: the system's use could plausibly lead to incidents involving violations of rights or harm to communities. The article does not focus on responses, updates, or broader ecosystem context, so it is not Complementary Information.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection
Event/anomaly detection


Articles about this incident or hazard


A new era in judicial supervision: the era of signing in at the police station is ending

2026-04-22
birgun.net

The era of signing in at the police station is ending: 450,000 offenders tracked from their pockets

2026-04-22
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
The system described involves AI or AI-like biometric and tracking technologies used to monitor individuals' compliance with legal obligations. The article does not report any actual harm or incident resulting from the system's use; it focuses on the planned implementation and expected benefits such as increased efficiency and reduced workload, with no mention of realized injury, rights violations, or other harms. Because the system's deployment could plausibly lead to privacy concerns or rights violations in the future, even though none are stated or evidenced in the article, the event is best classified as an AI Hazard: its use could plausibly lead to surveillance-related harms and rights infringements, but no harm has yet occurred or been reported.

A new era in judicial supervision: Is the era of signing in at the police station ending? - Evrensel

2026-04-22
Yeni Evrensel Gazetesi
Why's our monitor labelling this an incident or hazard?
The system involves AI-enabled biometric tracking, which qualifies as an AI system. Since the system is still in the procurement phase and no harm or misuse has been reported, the event does not describe an AI Incident. It also does not describe a plausible future harm scenario or credible risk of harm from the system's development or use at this stage, so it is not an AI Hazard. The article provides information about the system's development and deployment plans, which is complementary information enhancing understanding of AI applications in justice monitoring.

A striking claim from the pro-government press: Is the era of signing in at the police station ending?

2026-04-22
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The described system clearly involves AI technologies (biometric recognition and automated GPS tracking with real-time alerts). The event concerns the development and planned deployment of this AI system for monitoring individuals under legal supervision. Although the article reports no realized harm, the system's capabilities and intended use raise credible risks of privacy violations and potential human rights infringements, so it fits the definition of an AI Hazard: it could plausibly lead to an AI Incident if those harms materialize. With no indication of current harm, it is not an AI Incident, and it is not merely Complementary Information, because the focus is on the new system's development and potential impact rather than on responses or updates to past incidents.

Instead of signing in at the police station, now real-time tracking via phone!

2026-04-22
Gazete Pencere
Why's our monitor labelling this an incident or hazard?
The system clearly involves AI technologies such as biometric recognition and automated GPS-based monitoring with real-time alerts, satisfying the AI system criterion. Since the system is not yet deployed and no harm has been reported, it does not qualify as an AI Incident; however, deploying such pervasive surveillance technology could plausibly lead to violations of privacy or other rights, making it an AI Hazard. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information. Nor is it unrelated, since it involves an AI system and potential harm.