China's AI-Powered Network ID System Raises Privacy and Human Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China's Ministry of Public Security has deployed an AI-driven 'network ID' system in Fujian and Guangdong, requiring biometric data for online authentication. Critics warn that it enables mass surveillance, centralizes sensitive data, and could give the government control over internet access, raising significant privacy, security, and human rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (facial recognition and biometric verification) for online identity authentication. Although no direct harm has been reported yet, the system's deployment could plausibly lead to significant harms such as violations of privacy rights, suppression of freedom of expression, and social control, which fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm stemming from the AI system's use in surveillance and control.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Democracy & human autonomy

Industries
Digital security; Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Public interest

Severity
AI hazard

Business function
ICT management and information security

AI system task
Recognition/object detection


Articles about this incident or hazard

CCP Pushes Network ID Cards: Do the Authorities Get the Final Say on Who Can Go Online? - The Epoch Times

2020-11-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric and identity verification technology) used by the Chinese government to control and monitor internet access. The system's use has directly led to harms such as privacy violations and potential censorship, infringing on human rights. The centralized collection of sensitive biometric data also raises ethical and security concerns. These harms are realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Yuan Bin: CCP's Rollout of an 'Online Good-Citizen Permit' Draws Criticism from Netizens

2020-11-28
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition and biometric verification) for online identity authentication. Although no direct harm has been reported yet, the system's deployment could plausibly lead to significant harms such as violations of privacy rights, suppression of freedom of expression, and social control, which fall under violations of human rights and harm to communities. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm stemming from the AI system's use in surveillance and control.

Firewall Surveillance Isn't Enough: Ministry of Public Security Rolls Out Network IDs in Guangdong and Fujian (Photos) - Current Affairs Tracking

2020-11-28
看中国 (Vision Times)
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved: the network ID system uses biometric recognition and identity verification technologies, which rely on AI for processing and authentication. The event stems from the system's development and use. While the article does not report direct realized harm, it highlights credible concerns about privacy breaches, data security risks, and potential government overreach leading to human rights violations. These concerns indicate plausible future harms linked to the AI system's deployment. Since no direct harm is confirmed but plausible harm is credible, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the system's deployment and associated risks, not on responses or updates to past incidents. It is not unrelated because the AI system and its societal implications are central to the report.

Do the Authorities Decide Who Can Go Online? Ministry of Public Security Pilots Network IDs in Guangdong and Fujian

2020-11-27
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (biometric verification using facial recognition and fingerprint data) developed and deployed by the Chinese Public Security Bureau. The system's use in issuing a 'network ID' for online authentication directly involves AI in identity verification. Although no direct harm has been reported yet, the article highlights credible concerns about privacy violations, the centralization of sensitive data, and potential government control over internet access, which could plausibly lead to violations of human rights and freedoms. Hence, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future.