Hong Kong Plans Deployment of AI-Powered Facial Recognition Surveillance


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hong Kong authorities plan to deploy AI-driven facial recognition technology in public surveillance cameras, prioritizing high-traffic commercial areas under the SmartView program. The rollout, delayed by legal and technical issues, has raised concerns over potential mass surveillance and privacy violations, though no harm has yet occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the planned use of AI systems (facial recognition integrated with video analytics) in public surveillance, which could plausibly lead to harms such as violations of human rights, including privacy and potential misuse for mass surveillance. Although the system is not yet active, the article indicates a credible and imminent risk of harm due to the scale and nature of the AI deployment. Therefore, this constitutes an AI Hazard rather than an Incident, as no realized harm is reported yet but plausible future harm is credible.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Government, security, and defence
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

AI system task
Recognition/object detection


Articles about this incident or hazard


Hong Kong will increase mass surveillance with the installation of facial-recognition security cameras on the streets

2026-02-15
infobae

Hong Kong will install facial-recognition security cameras on its streets this year

2026-02-15
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of an AI system (facial recognition and AI-driven video analysis) in public surveillance, which is explicitly mentioned. The system is not yet active, so no direct harm has occurred, but the article clearly states the intention to deploy it soon and discusses the potential for expanded mass surveillance. This raises plausible future harms related to human rights violations and societal impacts. Since no actual harm has been reported yet, it does not qualify as an AI Incident. The article is not merely complementary information because it focuses on the planned deployment and its implications rather than updates or responses to past incidents. Therefore, the classification as an AI Hazard is appropriate.

Hong Kong will install facial-recognition security cameras on its streets

2026-02-15
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of AI systems (facial recognition integrated with video analytics) in public surveillance. Although no harm has yet occurred, the deployment of mass AI surveillance systems with facial recognition capabilities plausibly leads to violations of human rights, such as privacy rights and potential misuse for mass surveillance. The article focuses on the intended use and expansion of this AI system, with legal and societal concerns noted but no incident of harm reported. Therefore, this qualifies as an AI Hazard, reflecting a credible risk of future harm from the AI system's use.

Hong Kong will increase mass surveillance with the installation of facial-recognition security cameras on the streets

2026-02-15
La Banda Diario
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-driven facial recognition technology in public surveillance, which qualifies as an AI system. The deployment is planned but delayed due to legal and technical challenges, so no direct harm has yet occurred. However, the expansion of mass surveillance with AI-powered facial recognition plausibly leads to violations of human rights and privacy, fitting the definition of an AI Hazard. There is no indication of actual harm or incidents at this stage, so it cannot be classified as an AI Incident. The article is not merely general AI news or a complementary update but focuses on the credible future risk posed by this AI system's deployment.

Hong Kong will install facial-recognition security cameras on its streets this year

2026-02-16
Noticias Venevisión
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (facial recognition technology) that is planned for use but not yet deployed. Since the system is not currently causing harm but could plausibly lead to harms such as violations of privacy and human rights once operational, this constitutes an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The focus is on potential future risks rather than a response or update to a past incident, so it is not Complementary Information.