Chinese Employees Bypass AI Facial Recognition Attendance System Using Printed Masks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Wenzhou, China, local government employees exploited an AI-based facial recognition attendance system by using printed masks of colleagues' faces to fraudulently clock in for absent staff. The misuse, captured on surveillance cameras, undermined workplace integrity and exposed vulnerabilities in the AI system's security.[AI generated]

Why's our monitor labelling this an incident or hazard?

The facial recognition attendance system is an AI system involved in the event. The employees' deliberate misuse of the system to falsify attendance records has directly led to harm in the form of corruption and unfair labor practices. This misuse undermines the integrity of workplace management and violates labor-related obligations. Hence, the event meets the criteria for an AI Incident due to the realized harm caused by the AI system's misuse.[AI generated]
AI principles
Robustness & digital security; Accountability

Industries
Government, security, and defence

Affected stakeholders
Workers; Government

Harm types
Economic/Property; Reputational; Public interest

Severity
AI incident

Business function:
Human resource management

AI system task:
Recognition/object detection


Articles about this incident or hazard


Viral: Chinese employee tricks face detection to mark attendance for multiple colleagues by wearing face masks - The Times of India

2025-12-11
The Times of India
Why's our monitor labelling this an incident or hazard?
The facial recognition attendance system is an AI system involved in the event. The employees' deliberate misuse of the system to falsify attendance records has directly led to harm in the form of corruption and unfair labor practices. This misuse undermines the integrity of workplace management and violates labor-related obligations. Hence, the event meets the criteria for an AI Incident due to the realized harm caused by the AI system's misuse.

The $2 Hack Used By Chinese Workers To Outsmart Facial Recognition And Skip Work

2025-12-10
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a facial recognition AI system used for attendance. The employees exploited a vulnerability in the AI system by using printed masks to trick it, which is a misuse of the AI system's outputs. This misuse directly led to harm in the form of workplace fraud and corruption, which can be considered a violation of labor rights and harm to workplace fairness. The AI system's malfunction or inability to detect spoofing was a contributing factor. Hence, the event meets the criteria for an AI Incident as the AI system's use directly led to harm.

Chinese employees caught using printed face masks to trick clocking system and skip work

2025-12-10
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The facial recognition clocking system qualifies as an AI system due to its use of biometric AI for identity verification. The employees' use of printed face masks to trick the system represents misuse of the AI system leading to fraudulent behavior. While this misuse does not involve injury, critical infrastructure disruption, or legal rights violations, it causes realized harm in the form of workplace fraud and operational disruption. Therefore, this event is classified as an AI Incident due to the realized harm caused by misuse of the AI system in a way that undermines its intended function and trustworthiness.

China public servants use face masks to bypass facial recognition machines, skip work

2025-12-10
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system used for attendance. The event involves misuse of the AI system to bypass attendance, which is a misuse of AI in the workplace. However, the article does not report any direct or indirect harm such as injury, rights violations, or significant disruption. The harm is limited to workplace fraud, which is not explicitly listed as a qualifying harm under the AI Incident definition. There is no indication of plausible future harm beyond the current misuse. Thus, the event is best classified as Complementary Information highlighting challenges and misuse of AI systems in practice.

China public servants use face masks to bypass facial recognition to help each other skip work

2025-12-11
The Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—facial recognition used for clocking in. The misuse (wearing printed face masks to fool the system) directly leads to harm by enabling employees to skip work dishonestly, which is a violation of labor rights and workplace rules. The harm is realized and documented, with surveillance footage confirming the malpractice. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to a breach of labor rights and workplace integrity.