China's AI Surveillance of Uyghurs Raises Human Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese authorities are reportedly using AI-powered cameras to monitor Uyghurs in cities such as Shanghai and Chengdu, creating detailed movement profiles. The German media outlet Table.Media uncovered public tenders for software designed to identify Uyghurs, pointing to facial recognition efforts spanning six years and raising significant human rights concerns. [AI generated]

Why's our monitor labelling this an incident or hazard?

This is an active deployment of an AI system (facial recognition) that leads directly to systematic discrimination against and surveillance of a protected ethnic minority, constituting a violation of human rights. The AI's use in identifying and tracking Uyghurs in public is causing tangible harm and rights abuses. [AI generated]
AI principles
Respect of human rights; Privacy & data governance; Transparency & explainability; Fairness; Democracy & human autonomy; Accountability; Human wellbeing

Industries
Government, security, and defence

Affected stakeholders
Other

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI incident

Business function:
Compliance and justice; Monitoring and quality control

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard


China uses AI cameras to monitor Uyghurs, reports German media

2024-06-16
ThePrint
Why's our monitor labelling this an incident or hazard?
This is an active deployment of an AI system (facial recognition) that leads directly to systematic discrimination against and surveillance of a protected ethnic minority, constituting a violation of human rights. The AI's use in identifying and tracking Uyghurs in public is causing tangible harm and rights abuses.

China uses AI cameras to monitor Uyghurs, reports German media | International

2024-06-16
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article describes the active use of AI-based facial recognition software by police to target Uyghurs in public spaces, generate movement profiles, and trigger alerts, constituting direct human rights violations and discriminatory surveillance of a protected ethnic group. This is a realized harm caused by an AI system's use, fitting the definition of an AI incident.

World News | China Uses AI Cameras to Monitor Uyghurs, Reports German Media | LatestLY

2024-06-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes the ongoing, active deployment of AI systems, specifically facial recognition, with the explicit purpose of identifying and tracking Uyghurs, a Muslim minority, across public spaces and alerting authorities. This constitutes direct harm through violations of fundamental human rights, including discriminatory surveillance and profiling. Therefore, this is an AI incident.

China uses AI cameras to monitor Uyghurs, reports German media

2024-06-16
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The reported tenders describe deployed AI systems designed to identify and monitor Uyghur individuals nationwide, including real-time alerts and linkage to police databases. This constitutes an active use of AI causing direct harm through racial profiling, privacy violations, and suppression of a minority group's rights.