Hong Kong's AI Surveillance Expansion Raises Human Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hong Kong authorities plan to install thousands of AI-powered surveillance cameras, including facial recognition technology, to combat crime. Critics warn this expansion could erode privacy and civil liberties, drawing comparisons to China's authoritarian surveillance practices and raising concerns about potential human rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use and planned expansion of AI systems such as facial recognition integrated into surveillance cameras. Although the current system contributes to public safety, the main concern is the plausible future erosion of civil liberties and privacy violations due to intrusive AI surveillance. No direct or indirect harm has been reported as having occurred yet, but the credible risk of such harm is clearly articulated. Hence, the event fits the definition of an AI Hazard, reflecting a credible potential for harm stemming from AI system use in surveillance.[AI generated]
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI hazard

Business function
Compliance and justice; Monitoring and quality control

AI system task
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


The city that has become a "Big Brother": people are filmed and monitored at every step, making it one of the safest in the world, but it hides a major danger

2024-10-06
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use and planned expansion of AI systems such as facial recognition integrated into surveillance cameras. Although the current system contributes to public safety, the main concern is the plausible future erosion of civil liberties and privacy violations due to intrusive AI surveillance. No direct or indirect harm has been reported as having occurred yet, but the credible risk of such harm is clearly articulated. Hence, the event fits the definition of an AI Hazard, reflecting a credible potential for harm stemming from AI system use in surveillance.

The city that has turned into a true Big Brother. People are filmed and monitored at every step

2024-10-07
Fanatik.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered facial recognition technology being considered for use in Hong Kong's surveillance system. The system's development and use could plausibly lead to violations of privacy and human rights, which are harms under the AI Incident definition. However, no actual harm or incident is reported as having occurred yet; the concerns are about future risks and the need for regulation. Hence, this fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet done so.

The city where people are filmed and surveilled non-stop. Why the authorities made this decision

2024-10-07
Gândul
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (facial recognition and AI-assisted surveillance) actively deployed by authorities to monitor citizens. While the article does not report a specific instance of harm, the pervasive surveillance and the use of AI for facial recognition pose significant risks of human rights violations, particularly to privacy and freedom of expression. Given the scale and nature of the system, there is a credible risk that such AI use could lead to harms such as rights violations or abuses of social control. This situation therefore qualifies as an AI Hazard: the AI system's use could plausibly lead to an AI Incident involving human rights violations, even though no specific harm has been reported yet.

Hong Kong wants to install thousands of surveillance cameras. Critics say this is further proof the city is moving closer to China

2024-10-06
DCnews
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of AI systems (facial recognition and AI surveillance) by Hong Kong police, which could plausibly lead to violations of human rights and repression (harm category c). Since no actual harm or incident has been reported yet, but credible concerns about future misuse exist, this qualifies as an AI Hazard. The article focuses on the potential for harm rather than describing a realized AI Incident or a response to one, so it is not Complementary Information. It is clearly related to AI systems and their societal impact, so it is not Unrelated.