Chinese Authorities Use AI Facial Recognition Trained on Deceptively Collected Data for Mass Surveillance and Ethnic Profiling

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Chinese authorities collected facial data from rural villagers in Henan Province, offering small gifts such as cooking oil in exchange, to train AI facial recognition systems. These systems enable mass surveillance and are used by police to monitor, classify, and discriminate against ethnic minorities, raising serious human rights concerns over privacy and ethnic profiling.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (facial recognition algorithms) developed and deployed for mass surveillance by the Chinese government. The collection of facial data to train these AI systems and their deployment for identifying, classifying, and tracking individuals constitutes a direct use of AI leading to violations of human rights and fundamental freedoms. The article details realized harms such as surveillance, classification, and tracking of citizens, including vulnerable groups, which fits the definition of an AI Incident under violations of human rights and breach of legal protections.[AI generated]
AI principles
Privacy & data governance, Fairness, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Shocking! Washington Post Reporter Exposes Henan Residents Selling Their Faces to Train AI Facial Recognition

2022-09-04
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition algorithms) developed and deployed for mass surveillance by the Chinese government. The collection of facial data to train these AI systems and their deployment for identifying, classifying, and tracking individuals constitutes a direct use of AI leading to violations of human rights and fundamental freedoms. The article details realized harms such as surveillance, classification, and tracking of citizens, including vulnerable groups, which fits the definition of an AI Incident under violations of human rights and breach of legal protections.

Training AI Facial Recognition: Henan, China Uses Cooking Oil to Deceptively Obtain Facial Data

2022-09-05
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI facial recognition systems trained on data collected under deceptive conditions. The AI is used by police to monitor and classify individuals, particularly ethnic minorities, leading to discriminatory surveillance and potential repression. This is a clear violation of human rights and fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, as the AI system is actively used for oppressive surveillance.

Training AI Facial Recognition: Henan, China Uses Cooking Oil to Deceptively Obtain Facial Data

2022-09-05
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically facial recognition AI, used for surveillance and ethnic profiling. The collection of facial data under deceptive circumstances for training AI models and the deployment of these systems to monitor and classify ethnic minorities as criminals directly leads to violations of human rights and breaches of legal protections. The harms are realized and ongoing, including discriminatory surveillance and potential repression, fitting the definition of an AI Incident due to direct harm caused by the AI system's use.

Digital Surveillance? Henan Residents "Sell Their Faces" for Oil as China Is Exposed Training Facial Recognition AI

2022-09-05
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for facial recognition, explicitly described as being trained and deployed for mass surveillance and social control. The collection of facial data under coercive or deceptive conditions and the use of AI to track, classify, and monitor citizens, including vulnerable groups, directly results in violations of fundamental rights and harms to communities. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

CCP Develops Facial Recognition, Uses Gifts to Entice People to Sell Their Faces

2022-09-05
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (facial recognition technology) used by the Chinese government to collect and monitor personal data of individuals, including vulnerable populations like elderly villagers and students. The collection is incentivized but lacks transparency and informed consent, constituting a violation of fundamental rights. The deployment of these systems in public institutions for surveillance and behavioral monitoring further supports the conclusion of realized harm. Therefore, this qualifies as an AI Incident due to direct involvement of AI systems causing violations of human rights and harm to communities through pervasive surveillance.

Washington Post Reporter Reveals Henan, China Residents Scrambling to Sell Their Faces for Cooking Oil to Feed AI Recognition

2022-09-05
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (facial recognition technology) in the development and deployment stages, with direct use leading to the collection and processing of sensitive personal data. The large-scale, possibly coercive or uninformed collection of biometric data constitutes a violation of fundamental rights, including privacy and potentially other human rights. The AI system's role is pivotal in enabling this surveillance and data collection. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy through the use of AI facial recognition systems.

Washington Post Reporter Reveals Henan, China Residents Selling Their Faces for Cooking Oil to Train AI Facial Recognition

2022-09-05
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of AI facial recognition systems trained on data collected through coercive or exploitative means. The AI system is used by authorities to monitor and discriminate against ethnic minorities, which is a clear violation of human rights. The harm is realized and ongoing, as the system enables pervasive surveillance and categorization of people based on ethnicity and other attributes, leading to social and political harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to communities.

Training AI Facial Recognition: Henan, China Uses Cooking Oil to Deceptively Obtain Facial Data

2022-09-05
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI facial recognition systems trained on data collected through deceptive means. The AI system is used by police to monitor and classify ethnic minorities as criminals, which constitutes a violation of human rights and harm to communities. The involvement of AI in surveillance and ethnic profiling directly leads to these harms, qualifying this event as an AI Incident under the OECD framework.

For AI Surveillance With No Blind Spots, Beijing Entices Citizens to "Sell Their Faces"

2022-09-05
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for facial recognition to monitor and track individuals without their informed consent, including vulnerable groups such as Uyghurs. The AI system's outputs are used by authorities to identify and classify people, which constitutes a violation of human rights and fundamental rights. The involvement of AI in enabling this mass surveillance and the documented serious human rights abuses linked to it meet the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing these harms.

Training AI: Henan, China Uses Cooking Oil to Deceptively Obtain Facial Recognition Data

2022-09-06
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (facial recognition algorithms) whose development relies on collected facial data obtained through deceptive practices. The AI is used by police to surveil and classify ethnic minorities, which is a clear violation of human rights and fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, as the system is actively used for surveillance and discriminatory alerts. Therefore, this event qualifies as an AI Incident due to direct involvement of AI in causing human rights violations and harm to communities.

Training AI Facial Recognition: Henan, China Uses Cooking Oil to Deceptively Obtain Facial Data

2022-09-05
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of AI facial recognition systems trained on data collected through deceptive means. The AI system is used by police to classify and monitor ethnic minorities as 'innate criminals,' which constitutes a violation of human rights. This harm is realized and ongoing, not merely potential. Therefore, this event qualifies as an AI Incident due to the direct link between AI use and violations of fundamental rights and discriminatory surveillance.

Foreign Reporter Witnesses Henan Residents Selling Their Faces for AI Training

2022-09-05
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems, specifically facial recognition AI, which is explicitly mentioned. The collection of facial data without clear informed consent and the use of this data for surveillance and monitoring by authorities implicate violations of human rights, particularly privacy rights. The AI system's role is pivotal as it enables mass surveillance and monitoring, which can lead to harm to individuals' rights and freedoms. Therefore, this qualifies as an AI Incident due to the realized harm in terms of human rights violations through surveillance and data collection practices.