
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Hong Kong authorities plan to deploy AI-driven facial recognition technology in public surveillance cameras, prioritizing high-traffic commercial areas under the SmartView program. The rollout, delayed by legal and technical issues, has raised concerns over potential mass surveillance and privacy violations, though no harm has yet occurred.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the planned use of AI systems (facial recognition integrated with video analytics) in public surveillance, which could plausibly lead to harms such as violations of human rights, including privacy, and misuse for mass surveillance. Although the system is not yet active, the article indicates a credible and imminent risk of harm given the scale and nature of the planned AI deployment. This therefore constitutes an AI Hazard rather than an AI Incident: no realized harm has been reported, but plausible future harm is credible.[AI generated]