
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Turkey's Interior Minister announced that, starting in 2025, all police officers will wear body cameras equipped with AI-based facial recognition. The system enables real-time monitoring and identification of individuals, raising concerns about potential privacy violations and misuse, though no harm has yet occurred.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes an AI technology (facial recognition) integrated into police body cameras that will monitor and identify individuals in real time, which involves AI system development and intended use. However, because the deployment is planned for the future, no direct or indirect harm has yet materialized; the event instead represents a plausible risk of harm, such as privacy violations or misuse. It therefore qualifies as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]