China's AI-Powered Mass Surveillance Leads to Human Rights Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

China has deployed over 500 million AI-enabled surveillance cameras, using facial recognition and predictive analytics to monitor and suppress its population, including minorities and dissidents. This mass surveillance, described as creating an 'AI totalitarian state,' has resulted in widespread violations of human rights and fundamental freedoms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems: AI-powered facial recognition and integrated surveillance platforms. The use of these systems directly violates human rights and fundamental freedoms, as the surveillance is employed to monitor and suppress dissent and human rights activity. It therefore constitutes an AI Incident, since the harm to human rights and freedoms caused by AI-enabled surveillance has been realized.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence; Digital security; IT infrastructure and hosting

Affected stakeholders
General public; Civil society

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI incident

Business function
Monitoring and quality control; Compliance and justice

AI system task
Recognition/object detection; Event/anomaly detection; Forecasting/prediction


Articles about this incident or hazard


World's highest number of surveillance cameras: China builds an 'AI totalitarian state' - International - Liberty Times Net

2022-08-28
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: AI-powered facial recognition and integrated surveillance platforms. The use of these systems directly violates human rights and fundamental freedoms, as the surveillance is employed to monitor and suppress dissent and human rights activity. It therefore constitutes an AI Incident, since the harm to human rights and freedoms caused by AI-enabled surveillance has been realized.

China ranks first worldwide in surveillance cameras; experts warn of an 'AI totalitarian state' | 聯合新聞網 (UDN)

2022-08-28
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as facial recognition and predictive AI used in China's vast surveillance network, which monitors and suppresses citizens, including minority groups. This constitutes a violation of human rights and fundamental freedoms, a direct harm caused by the use of AI. The involvement of AI in enabling this mass surveillance and repression is clear and central to the harm described. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

China ranks first worldwide in surveillance cameras; experts warn of an 'AI totalitarian state' | International News | NOWnews 今日新聞

2022-08-28
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (facial recognition, predictive AI surveillance) deployed at massive scale for monitoring and controlling people, which directly leads to violations of human rights and harm to communities through repression and loss of freedoms. The AI system's use is central to the harm described. The military AI applications mentioned also indicate potential further harm. Since harm is occurring and AI is pivotal, this is an AI Incident rather than a hazard or complementary information.

The surveillance cameras above the heads of China's people: Big Brother is watching you

2022-08-28
Voice of America (美國之音)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (facial recognition, integrated surveillance platforms) used by the Chinese government to monitor and suppress citizens, including minority groups, which constitutes a violation of human rights. The harms are ongoing and realized, not merely potential. The AI systems' development and use are central to these harms. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant human rights violations.

China ranks first worldwide in surveillance cameras; experts warn of an 'AI totalitarian state' | Technology | Central News Agency (CNA)

2022-08-28
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (facial recognition, predictive AI surveillance) deployed at massive scale for monitoring and controlling populations, which directly results in violations of human rights and suppression of freedoms. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities). The military AI applications mentioned also indicate potential further harm, but the primary focus is on realized harms from surveillance. Hence, the classification is AI Incident.