Hong Kong Police Expand AI Surveillance with Facial and License Plate Recognition


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hong Kong police are rapidly expanding the 'Sharp Eye' surveillance system, integrating AI-powered facial and license plate recognition across 15,000 cameras and patrol cars by 2027. The system has already aided in solving 351 cases and arresting 628 suspects, raising privacy concerns despite legal compliance measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (automatic license plate recognition and planned facial recognition) in active law enforcement operations that have already led to the identification and arrest of suspects. Because these systems are deployed in a way that directly affects human rights and public safety, the event falls within the scope of an AI Incident. Although privacy concerns are noted, the article reports safeguards and compliance efforts rather than violations. The active deployment and use of AI in policing, with direct societal impact, is therefore classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Recognition/object detection


Articles about this incident or hazard


60 police patrol cars fitted with license plate recognition cameras to go on patrol by end of July; facial recognition of wanted persons under study

2025-07-24
香港01
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (automatic license plate recognition and planned facial recognition) in active law enforcement operations that have already led to the identification and arrest of suspects. Because these systems are deployed in a way that directly affects human rights and public safety, the event falls within the scope of an AI Incident. Although privacy concerns are noted, the article reports safeguards and compliance efforts rather than violations. The active deployment and use of AI in policing, with direct societal impact, is therefore classified as an AI Incident.

Linking CCTV in commercial buildings and malls to the police system under study; participation initially expected to be voluntary; facial recognition trial as early as year-end

2025-07-24
明報新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology integrated into a police surveillance system, which qualifies as an AI system. The system is intended for use in public surveillance and law enforcement, which implicates potential violations of privacy and human rights if misused or inadequately regulated. Although the system has helped solve crimes using CCTV footage, the facial recognition component is still in trial and not yet reported to have caused harm. Given the plausible risk of privacy violations and human rights breaches from such AI surveillance systems, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the planned deployment and potential implications of the AI system, not just an update on past incidents.

CCTV-equipped police cars to enter service at end of month, recognizing license plates and analyzing data

2025-07-24
明報新聞網
Why's our monitor labelling this an incident or hazard?
The police cars' CCTV systems with automatic license plate recognition involve AI technology for real-time data analysis and identification. The use of this AI system directly supports law enforcement activities by enabling identification and tracking of vehicles related to crimes. While the article does not report any harm or incident resulting from this deployment, the use of AI for surveillance and data analysis has implications for privacy and potential misuse. However, since no harm or violation is reported or implied, and the article focuses on the deployment and intended use, this qualifies as Complementary Information about AI system use and its societal implications rather than an Incident or Hazard.

Over 3,000 cameras already installed across districts under police 'Sharp Eye' programme, helping solve 351 cases

2025-07-24
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and automatic license plate recognition) in active law enforcement operations that have directly led to the identification and apprehension of suspects, contributing to the resolution of criminal cases. However, the article does not report any harm caused by the AI system itself, such as privacy violations, malfunction, or misuse. Since the AI system's use has a direct societal impact (crime solving) but no harm or violation is reported, this is best classified as Complementary Information providing context on AI deployment and its societal implications rather than an AI Incident or Hazard.

15,000 cameras to cover all of Hong Kong! Police 'Sharp Eye' programme to be upgraded with AI facial recognition

2025-07-25
香港文匯網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (facial recognition, license plate recognition, AI video analysis) in the police surveillance project. The AI system is in active use and has contributed to crime solving, but no direct or indirect harm (such as privacy violations, wrongful arrests, or misuse) is reported. The article highlights legal compliance and privacy protections, indicating governance efforts. There is no indication of plausible future harm beyond general concerns, nor any incident of harm occurring. Thus, the event does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information as it updates on AI deployment, governance, and societal implications.

Fighting crime with technology to keep people safe, while also protecting privacy

2025-07-25
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition, license plate recognition, AI-powered video analysis) in a real-world law enforcement context. Their use has directly led to the resolution of numerous criminal cases, a realized impact on public safety and harm prevention. The article explicitly describes the AI systems' role in assisting police investigations and preventing crime, which meets the criteria for an AI Incident: the systems' use has directly affected the health and safety of people. Although privacy concerns are addressed, no violations or harms are reported, so the event is classified as an Incident reflecting the realized impact of AI use in public safety, rather than as a Hazard or Complementary Information.

CCTV enforcement scores repeated successes; Hong Kong police expect to introduce facial recognition by year-end to help track down suspects

2025-07-25
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition, license plate recognition, AI video analysis) in law enforcement, which have directly contributed to solving crimes and apprehending suspects, with realized effects on public safety and crime prevention. The systems' deployment is directly linked to law enforcement outcomes. Although privacy concerns are addressed, the primary focus is the AI systems' role in crime detection and prevention, and because their use has already produced these direct effects, the event is classified as an AI Incident.

CCTV enforcement scores repeated successes; Hong Kong police expect to introduce facial recognition by year-end to help track down suspects

2025-07-25
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition, AI-based video analysis) in active law enforcement surveillance, which has directly led to arrests and crime resolution, thus causing impacts on individuals' privacy and potentially human rights. The article explicitly mentions the use of AI for facial recognition and automatic license plate recognition, and the system's role in solving crimes. Although the police emphasize compliance with privacy laws, the deployment of such AI surveillance systems inherently involves risks of rights violations and harm to privacy, which are recognized harms under the framework. Therefore, this is an AI Incident rather than a hazard or complementary information, as the AI system's use has already led to significant effects on people and communities.

Hong Kong police CCTV system to add facial recognition as early as year-end

2025-07-25
hkcna.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in the police's CCTV network. The use of facial recognition for surveillance and crime prevention can lead to violations of human rights, particularly privacy rights, and potential misuse or overreach. However, the article does not report any realized harm or incidents resulting from this deployment yet; it only discusses plans and ongoing integration. Therefore, this constitutes a plausible future risk of harm due to AI use in surveillance, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Law enforcement CCTV to gain facial recognition as early as year-end; linking commercial building cameras to the system under study

2025-07-25
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition and automatic license plate recognition) in law enforcement, which can impact privacy and human rights. However, no direct or indirect harm has been reported so far. The article focuses on the planned implementation and integration of these AI systems, with acknowledgment of privacy complexities and voluntary participation for private CCTV integration. Therefore, this is a plausible future risk scenario rather than a realized harm. Hence, it qualifies as an AI Hazard, as the deployment could plausibly lead to violations of rights or other harms in the future.

Patrol cars fitted with mobile CCTV to boost policing effectiveness

2025-07-25
hkcd.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned deployment of AI systems (automatic license plate recognition and facial recognition) in law enforcement. However, the article does not report any realized harm or incident resulting from these AI systems. Instead, it focuses on the operational benefits and future plans, including privacy protections. Therefore, this event represents a plausible future risk scenario (AI Hazard) due to the potential for privacy violations or misuse of facial recognition technology, but no actual harm has been reported yet. Hence, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.