China Launches AI-Enabled National Digital ID and Intensifies Online Surveillance, Sparking Rights Concerns and Public Backlash


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China has launched a national digital identity system integrating AI and big data for real-name online registration and surveillance, raising widespread concerns over censorship, privacy violations, and repression. Simultaneously, authorities intensified AI-driven campaigns to police online content affecting minors, addressing ongoing harms such as exploitation and mental health risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the deployment of a national digital identity authentication system that integrates AI and big data technologies to monitor and control online behavior. While the article does not report direct harm yet, it highlights credible concerns that the system could enable large-scale surveillance, censorship, and repression, which would constitute violations of human rights and harm to communities. The system's potential use in predictive policing and behavior scoring is a plausible future risk. Hence, this is an AI Hazard rather than an AI Incident: the harm is not yet realized but is foreseeable and credible.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy, Accountability, Safety

Industries
Government, security, and defence; Digital security; Media, social platforms, and marketing; IT infrastructure and hosting

Affected stakeholders
General public, Children

Harm types
Human or fundamental rights, Public interest, Psychological

Severity
AI hazard

Business function
Compliance and justice, ICT management and information security, Monitoring and quality control

AI system task
Recognition/object detection, Event/anomaly detection, Organisation/recommenders, Forecasting/prediction


Articles about this incident or hazard


Boycotting the "Internet ID" (網證網號) System: Opposition Flyers Appear in Qingdao | The Epoch Times

2025-07-15
The Epoch Times

[Hot Topics] Interview with a Former Software Designer: The CCP's Digital Tyranny I Witnessed | NTD Television

2025-07-15
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for mass digital surveillance and control, which directly leads to violations of human rights and fundamental freedoms. The deployment of a centralized digital identity system with real-name registration and monitoring capabilities constitutes an AI Incident because the system has already been implemented and is actively used to restrict and control citizens' online activities, harming communities and violating individuals' rights. The involvement of AI in data monitoring and control is reasonably inferred from the scale and nature of the system described.

Resisting the CCP's "Internet ID" System: Opposition Flyers Appear on Qingdao's Streets | NTD Television

2025-07-15
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves a digital identity and control system that manages online accounts and enforces real-name verification; it can reasonably be inferred to rely on AI or algorithmic systems to monitor, manage, and enforce online behavior at scale. The system's use could plausibly lead to violations of human rights, specifically freedom of expression and access to information, by enabling efficient censorship and suppression of dissenting voices. Since the system is newly launched and still in testing, with no concrete incidents of harm reported yet, the event fits the definition of an AI Hazard rather than an AI Incident. The article focuses on potential risks and societal concerns rather than on actual harm caused by the system so far.

Mainland Cracks Down on the Summer Online Environment for Minors, Guarding Against Crimes Such as Remote Sexual Abuse

2025-07-16
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event involves the use and regulation of AI systems that manage or influence online content and interactions affecting minors. The harms addressed include violations of minors' rights and harm to their health and well-being due to exposure to harmful online content and behaviors. Although the article does not specify a particular AI system malfunction or incident, the campaign targets harms directly linked to AI-enabled online environments and content dissemination. Since the harms are ongoing and the campaign is a response to these harms, this qualifies as an AI Incident involving violations of rights and harm to communities (minors).

Mainland Residents Resist the "Internet ID" System: Opposition Flyers Appear on Qingdao's Streets | The Epoch Times Taiwan

2025-07-15
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves the deployment of an AI-enabled national digital identity system that integrates user data across platforms, enabling enhanced surveillance and control. The article highlights plausible future harms such as suppression of dissent, censorship, and privacy violations, which constitute violations of human rights. Since the system is newly launched and the harms are anticipated but not yet concretely realized or documented as incidents, this qualifies as an AI Hazard. The article does not report a specific AI Incident with realized harm but warns of credible risks associated with the system's use.

Cyberspace Administration of China Launches Special Campaign to Clean Up the 2025 Summer Online Environment for Minors - Wen Wei Po (Hong Kong)

2025-07-15
Wen Wei Po (Hong Kong)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI features in the context of improper use affecting minors, which could plausibly lead to harms such as mental health problems or exploitation. However, the article does not report any specific realized harm or incident caused by AI systems; rather, it outlines a regulatory and enforcement campaign to prevent such harms. Therefore, this is a credible AI Hazard, as the development, use, or misuse of AI systems in this context could plausibly lead to incidents harming minors' health or rights in the future.

Cyberspace Authorities Publish Typical Cases in Crackdown on Short-Video Platform Disorder

2025-07-17
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to impersonate others and spread false information, which constitutes a violation of rights and harms communities by disrupting public order and spreading misinformation. Since these harms are occurring and enforcement actions are underway, this qualifies as an AI Incident. The AI system's misuse has directly led to harm in the form of misinformation and identity fraud on online platforms, fulfilling the criteria for an AI Incident under the OECD framework.