Facial Recognition Surveillance Raises Rights Concerns in Chinese City During COVID-19


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Authorities in Ruili, China, deployed facial recognition technology linked to health codes to monitor residents' movements and health status during a COVID-19 outbreak. This AI-driven surveillance has raised significant privacy and human rights concerns, particularly regarding pervasive monitoring and potential targeting of minorities.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition technology is an AI system used here for real-time monitoring of individuals' health status and movements. Its deployment has led to privacy concerns and potential violations of fundamental rights, which fits the definition of harm under (c) violations of human rights or breach of obligations under applicable law. The article explicitly mentions the use of AI surveillance tools by authorities, the collection and sharing of personal data, and criticism from rights groups about the surveillance's impact on rights. Hence, the event involves the use of an AI system leading to realized harm, qualifying it as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Fairness; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard


Facial recognition tech fights coronavirus in Chinese city

2021-07-13
Yahoo
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system used here for real-time monitoring of individuals' health status and movements. Its deployment has led to privacy concerns and potential violations of fundamental rights, which fits the definition of harm under (c) violations of human rights or breach of obligations under applicable law. The article explicitly mentions the use of AI surveillance tools by authorities, the collection and sharing of personal data, and criticism from rights groups about the surveillance's impact on rights. Hence, the event involves the use of an AI system leading to realized harm, qualifying it as an AI Incident.

Chinese City Using Facial Recognition Tech To Fight Coronavirus

2021-07-13
NDTV
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system explicitly mentioned as being used for monitoring individuals' health status and movements. The system's use has led to privacy concerns and potential violations of human rights, which constitute harm under the framework. The article reports the system is actively deployed and affecting people, not just a potential risk, so it is an AI Incident rather than a hazard or complementary information.

Facial recognition tech rolled out to fight Covid-19 in China's Ruili city

2021-07-13
Hindustan Times
Why's our monitor labelling this an incident or hazard?
Facial recognition technology is an AI system that processes biometric data to identify individuals. Its use here is explicitly linked to tracking movements and health status, which directly impacts individuals' privacy and potentially other human rights. The article reports that this system is actively used and monitored by authorities, with concerns raised by rights groups about surveillance and targeting, indicating realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights.

Facial Recognition Tech Fights Coronavirus In Chinese City

2021-07-13
UrduPoint
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here for tracking individuals' health status and movements during a pandemic. Although no explicit harm is reported, the system's operation in public surveillance with unclear data governance poses a credible risk of human rights violations, such as privacy breaches or misuse of personal data. Since the article does not describe actual harm but highlights the deployment of AI surveillance with potential risks, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information.

COVID-19: Facial recognition tech fights coronavirus in Chinese city

2021-07-13
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems linked to health codes for COVID-19 control, confirming AI system involvement. However, it does not describe any realized harm such as injury, rights violations, or other significant harms caused by the AI system. The concerns about data retention and privacy are noted but not reported as actual harms or legal breaches. The event is about the deployment and operational use of AI in pandemic management, which is informative and contextual but does not meet the criteria for an AI Incident or AI Hazard. Hence, it fits the definition of Complementary Information.

Facial Recognition Tech Fights Coronavirus In Chinese City

2021-07-13
International Business Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition technology) used in the development and deployment phase for public health monitoring. The system's use has directly led to privacy and human rights concerns, as it enables pervasive surveillance and data sharing with law enforcement, which can be considered a violation of fundamental rights. Although the primary stated goal is public health, the described surveillance practices and data handling have caused or are causing harm to individuals' rights. Therefore, this qualifies as an AI Incident due to violations of human rights linked to the AI system's use.

Facial Recognition Tech Fights Covid-19 In Chinese City

2021-07-15
Forbes India
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system explicitly mentioned as being used to track people's movements and health status. The use of such surveillance technology can implicate violations of privacy and human rights, especially given the scale and intensity of monitoring described. However, the article does not report any direct or indirect harm occurring from this use, nor does it describe any incident of malfunction or misuse leading to harm. Instead, it reports the deployment of the technology for public health purposes. Therefore, this event represents a plausible risk scenario where AI surveillance could lead to human rights violations or other harms, but no harm is reported as having occurred yet. Hence, it qualifies as an AI Hazard.

Corona tech: Facial recognition tech fights coronavirus in Chinese city

2021-07-13
RTL Today
Why's our monitor labelling this an incident or hazard?
Facial recognition technology combined with health code tracking is an AI system used here for public health monitoring. Its deployment has directly led to concerns about privacy violations and potential breaches of fundamental rights, which fall under harm category (c) - violations of human rights or breach of obligations under applicable law. The article reports the system is actively used and monitored by authorities, indicating realized harm rather than just potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

AFP - Health-virus-China-tech-surveillance

2021-07-13
nampa.org
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here for health surveillance. While such surveillance can raise concerns about privacy and human rights, the article does not specify any realized harm or legal violations, nor does it highlight risks that could plausibly lead to harm. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context on AI deployment in public health surveillance without reporting harm.

Facial recognition tech fights coronavirus in Chinese city

2021-07-13
Nation
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition technology (an AI system) combined with health codes to monitor individuals' movements and health status during a COVID-19 outbreak. This system is actively used, not hypothetical, and has led to privacy concerns and criticism from rights groups about surveillance and potential targeting of minorities, which constitutes a violation of human rights. Therefore, the AI system's use has directly led to harm in terms of rights violations, fitting the definition of an AI Incident.

China uses facial recognition against Covid-19: the pros and cons of this measure

2021-07-14
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition combined with temperature sensing and QR code tracking) used for public health monitoring. There is no explicit report of harm or rights violations occurring yet, but the system's use in surveillance and data collection in a politically sensitive area presents a credible risk of human rights violations or privacy breaches. Since no direct or indirect harm has been reported but plausible future harm exists, the classification as an AI Hazard is appropriate. The article focuses on the deployment and implications of the AI system rather than reporting an incident or a response to an incident, so it is not an AI Incident or Complementary Information.

China is already using facial recognition against the coronavirus: how it works

2021-07-13
Clarin
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition combined with temperature detection) in active deployment to monitor and control a public health issue. The AI system's use has directly led to quarantine of infected individuals, which is a health-related intervention. However, the article also highlights concerns about privacy and human rights violations due to surveillance. Since the AI system's use has directly led to actions affecting individuals' rights and privacy, this constitutes a violation of human rights or breach of obligations intended to protect fundamental rights. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm in terms of human rights infringement.

China tests facial recognition to control the coronavirus | Enables automatic monitoring of a person's movements

2021-07-13
Página/12
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition) for surveillance and health monitoring. While the system is actively used, the article focuses on the implementation and the concerns raised by human rights groups about privacy invasion, rather than reporting actual realized harm or legal breaches. Therefore, it represents a plausible risk of harm (privacy and human rights violations) but no confirmed incident of harm yet. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and harm to communities in the future.

China uses facial recognition cameras in a city to identify COVID-19 cases

2021-07-13
RPP noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition cameras and temperature measurement devices managed by local authorities, which are AI systems used to scan individuals' faces and monitor their health status. The deployment of these AI systems directly affects individuals' privacy and freedom of movement, constituting a violation of human rights. The harm is realized as the surveillance is actively ongoing and criticized for privacy invasion. Hence, this is an AI Incident due to the direct use of AI systems causing harm through rights violations.

China to use facial recognition to control people's movements and fight COVID-19

2021-07-13
Diario EL PAIS Uruguay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems to scan individuals' faces and track their movements via QR codes, linking to health data such as temperature. This AI use directly affects individuals' privacy and freedom of movement, which are fundamental human rights. The deployment is already active and impacting residents, not just a potential future risk. Human rights groups criticize this as invasive, indicating recognized harm. Hence, the event meets the criteria for an AI Incident involving violations of human rights due to AI system use.

A Chinese city uses facial recognition against Covid-19

2021-07-13
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of facial recognition AI systems to scan individuals entering or leaving certain areas, linking their biometric data to QR codes for automatic tracking. This AI use directly leads to privacy invasions and restrictions on movement, which are violations of fundamental human rights. The harm is realized, not just potential, as the system is actively used and criticized for infringing on citizens' privacy. Hence, it meets the criteria for an AI Incident under violations of human rights.

China seeks to fight Covid with facial recognition cameras

2021-07-14
Eje Central
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (facial recognition and temperature detection) used by authorities to monitor individuals' movements and health status. This use directly impacts individuals' privacy and can be reasonably considered a violation of human rights, as noted by human rights groups' criticisms. The AI system's use in this context leads to harm in the form of privacy invasion and potential rights violations, fulfilling the criteria for an AI Incident. There is no indication that harm is only potential or that this is a response or update, so it is not a hazard or complementary information. Therefore, the classification is AI Incident.

The city that controls the pandemic with facial recognition cameras

2021-07-13
El Mercurio de Tamaulipas
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (facial recognition and temperature measurement cameras) used by authorities to monitor and control people's movements and health status. This use directly affects individuals' rights to privacy and freedom of movement, which are fundamental human rights. The surveillance is active and mandatory for entering or leaving certain areas, indicating realized harm rather than potential harm. The article also mentions criticism from human rights groups about privacy invasion, reinforcing the classification as an AI Incident involving violations of human rights. Hence, the event meets the criteria for an AI Incident under the framework.

A Chinese city uses facial recognition against COVID-19

2021-07-13
telenoche.com.uy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based facial recognition and temperature scanning systems to monitor residents' movements and health status, which qualifies as AI system involvement. The use is for public health monitoring, and while it raises privacy and human rights concerns, no actual harm or violation has been reported or documented in the article. There is no indication that the AI system malfunctioned or caused injury, nor that it led to legal breaches or other harms. The concerns are societal and ethical, but no incident or plausible imminent harm is described. Hence, the event is Complementary Information, providing insight into AI deployment and related societal reactions without constituting an AI Incident or AI Hazard.

China implements facial recognition to fight Covid-19

2021-07-13
Noticias de El Salvador y el Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition cameras (an AI system) to scan people's faces and track their movements and health status via linked QR codes. This surveillance system is used by authorities to control the spread of COVID-19 but raises significant privacy concerns and is criticized by human rights groups for invading citizens' privacy. Since the AI system's use directly leads to a violation of human rights (privacy), this qualifies as an AI Incident under the framework.

China: a city deploys facial recognition against Covid-19

2021-07-13
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras and temperature scanning AI systems to monitor citizens' movements and health status directly involves AI technology. The deployment aims to control a Covid-19 outbreak, which is a public health issue. However, the article does not report any realized harm such as injury, rights violations, or other negative consequences resulting from this AI use. Instead, it describes a government measure intended to prevent harm (disease spread). While there are potential concerns about privacy and human rights, the article does not explicitly state that such harms have occurred. Therefore, this event is best classified as Complementary Information, as it provides context on AI use in public health surveillance without reporting an AI Incident or AI Hazard.

China: video surveillance used to fight Covid-19

2021-07-15
Franceinfo
Why's our monitor labelling this an incident or hazard?
The use of facial recognition AI combined with health data to monitor individuals and enforce isolation measures constitutes the use of an AI system leading to potential violations of human rights, such as privacy and freedom of movement. The system's deployment is actively influencing health outcomes by isolating infected persons, which is a direct use of AI with significant societal impact. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in actions affecting health and rights.

China: a city deploys facial recognition against Covid

2021-07-13
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The facial recognition cameras constitute an AI system, as they perform automated recognition and tracking of individuals. Authorities use the system to monitor movements and potentially restrict or control access to certain areas to prevent disease spread. This use directly affects individuals' privacy and could be considered a violation of human rights, particularly regarding surveillance and data privacy. However, the article does not report any realized harm, such as injury, formally recognized rights violations, or other direct negative consequences of the deployment; instead, it reports the implementation of the system as a public health measure. The event therefore represents a plausible risk of harm to privacy and rights from AI surveillance, but does not document actual harm yet. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Covid-19: a Chinese city deploys facial recognition to confront the pandemic

2021-07-13
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras with AI capabilities to scan faces, associate data with unique QR codes, and monitor movements constitutes the use of an AI system. The deployment is intended to manage a public health crisis by tracking infected individuals and their contacts. While the article does not report direct harm caused by the AI system, the use of such surveillance technology raises significant concerns about violations of human rights, particularly privacy and freedom of movement. Given that the AI system's use could plausibly lead to violations of rights or other harms if misused or if surveillance is excessive, this event qualifies as an AI Hazard rather than an AI Incident, as no direct harm is reported yet but there is a credible risk of harm due to the AI system's deployment in this context.

Coronavirus - China: facial recognition to fight Covid

2021-07-13
Le Matin
Why's our monitor labelling this an incident or hazard?
Facial recognition is an AI system used here to monitor and control human movement, which directly impacts individuals' privacy and potentially their rights. The use of such AI surveillance for controlling Covid-19 infections can be seen as a violation of human rights, particularly privacy and freedom of movement. Since the AI system's use has directly led to restrictions on people's movements and surveillance, this constitutes an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

Virus: A city deploys facial recognition

2021-07-13
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition cameras, which are AI systems, to scan faces and track movements linked to unique QR codes. This is a clear AI system involvement in use. However, the article does not report any direct or indirect harm occurring yet, such as rights violations or health injuries. The system is used for public health purposes, but the surveillance nature and data collection pose plausible risks of human rights violations or misuse. Therefore, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, but no harm has yet been reported.

A Chinese city deploys facial recognition against Covid

2021-07-13
Metro
Why's our monitor labelling this an incident or hazard?
The use of facial recognition cameras and temperature measurement devices clearly involves AI systems. The deployment is intended to monitor and control the spread of COVID-19, which is a public health issue. However, the article does not report any direct or indirect harm resulting from this AI system's use, such as violations of rights, health injuries caused by the system, or other harms. Instead, it reports the implementation of AI technology for epidemic control. Therefore, this event does not describe an AI Incident or AI Hazard but rather provides information about the use of AI in public health surveillance, which fits the definition of Complementary Information as it enhances understanding of AI applications and governance in the context of COVID-19 control.

China: a city deploys facial recognition against Covid

2021-07-13
Le Nouvelliste
Why's our monitor labelling this an incident or hazard?
The use of facial recognition technology constitutes an AI system. The deployment is intended to monitor and control the spread of Covid-19, which is a public health issue. However, the article does not report any direct or indirect harm resulting from this AI use, such as violations of rights, health injury caused by the AI system, or other harms. It describes the use of AI systems for public health surveillance, which could raise privacy or rights concerns, but no harm or incident is reported or implied. Therefore, this is not an AI Incident. It also does not describe a plausible future harm or risk scenario beyond the current use, so it is not an AI Hazard. The article provides information about the use of AI systems in a societal context, which is complementary information about AI deployment and governance in public health.

A Chinese city deploys facial recognition against Covid-19

2021-07-16
Next INpact.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition cameras, which are AI systems, to scan people's faces in public areas to control the spread of Covid-19. However, there is no mention of any direct or indirect harm resulting from this deployment, such as injury, rights violations, or other harms. The article focuses on the implementation of the technology as a measure against the pandemic, without reporting any realized harm or incidents. Therefore, this event is best classified as Complementary Information, as it provides context on AI use in public health surveillance but does not describe an AI Incident or AI Hazard.

Hi-Tech: China: A border city deploys facial recognition against Covid-19

2021-07-13
http://www.dknews-dz.com/
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial recognition cameras with temperature measurement) actively used by authorities to monitor and control citizens' movements during a Covid-19 outbreak. This use directly affects individuals' rights to privacy and freedom of movement, which are fundamental human rights. The AI system's deployment leads to a breach of these rights, fulfilling the criterion of harm under violations of human rights or breach of legal protections. Although the harm is not physical injury, the pervasive surveillance and tracking represent a significant and clearly articulated harm caused by the AI system's use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.