Smart Locks' Facial Recognition Vulnerabilities Exposed in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Consumer associations in Beijing, Tianjin, and Hebei tested 30 smart lock models and found that three facial recognition locks could be easily unlocked with photos, revealing serious AI anti-spoofing flaws. Additional risks include unencrypted data transmission and easily copied IC cards, posing threats to property and privacy.[AI generated]
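The photo attack succeeds because a matcher that compares facial appearance alone cannot distinguish a flat print of the owner's face from the owner. A minimal, hypothetical Python sketch (toy embedding vectors; `naive_unlock`, `liveness_unlock`, and all thresholds are illustrative, not any tested lock's implementation) shows the flaw and one common mitigation, requiring natural frame-to-frame variation:

```python
# Hypothetical sketch of the photo-spoofing flaw. Embeddings are toy
# 3-number vectors standing in for a face recognizer's output; a printed
# photo of the enrolled user yields (nearly) the same embedding as the
# live face, so distance alone cannot reject it.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def naive_unlock(enrolled, frame, threshold=0.5):
    # Flawed design: checks identity only, so a photo passes.
    return distance(enrolled, frame) < threshold

def liveness_unlock(enrolled, frames, threshold=0.5, motion_min=0.05):
    # Mitigation sketch: additionally require frame-to-frame motion
    # (blinks, micro-movements); a static photo yields identical frames.
    if not all(distance(enrolled, f) < threshold for f in frames):
        return False
    motion = max(distance(frames[i], frames[i + 1])
                 for i in range(len(frames) - 1))
    return motion >= motion_min

enrolled = [0.9, 0.1, 0.4]
photo_frames = [[0.9, 0.1, 0.4]] * 3  # static photo: zero motion
live_frames = [[0.9, 0.1, 0.4], [0.88, 0.12, 0.41], [0.91, 0.09, 0.38]]

print(naive_unlock(enrolled, photo_frames[0]))  # True: photo spoof succeeds
print(liveness_unlock(enrolled, photo_frames))  # False: spoof blocked
print(liveness_unlock(enrolled, live_frames))   # True: live user admitted
```

Production anti-spoofing uses richer signals (depth sensing, infrared, texture analysis) than this motion heuristic, but the failure mode the testers found is the same: the flawed locks behaved like `naive_unlock`.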

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly through the use of facial recognition technology in smart locks. The malfunction or inadequacy of the AI system's liveness detection and anti-spoofing features has directly led to security vulnerabilities that allow unauthorized access (harm to property and privacy). The article describes actual security incidents (successful unlocking with photos) and risks of data interception, constituting realized harms. Therefore, this qualifies as an AI Incident due to the direct link between AI system malfunction and harm.[AI generated]
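The data-interception risk compounds the spoofing flaw: an unlock command sent in the clear can be captured once and replayed at will. A hypothetical, stdlib-only sketch (the `Lock`/`phone_respond` protocol and all names are illustrative, not any vendor's design) shows a nonce-plus-HMAC challenge-response that defeats simple replay:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: a sniffed response cannot be replayed because each
# unlock attempt is bound to a fresh, single-use random nonce.

class Lock:
    def __init__(self, key):
        self._key = key
        self._nonce = None

    def challenge(self):
        # Issue a fresh random nonce per unlock attempt.
        self._nonce = secrets.token_bytes(16)
        return self._nonce

    def try_unlock(self, tag):
        if self._nonce is None:
            return False
        expected = hmac.new(self._key, b"UNLOCK" + self._nonce,
                            hashlib.sha256).digest()
        self._nonce = None  # nonce is single-use
        return hmac.compare_digest(tag, expected)

def phone_respond(key, nonce):
    # The paired phone proves possession of the shared key for this nonce.
    return hmac.new(key, b"UNLOCK" + nonce, hashlib.sha256).digest()

key = secrets.token_bytes(32)
lock = Lock(key)

tag = phone_respond(key, lock.challenge())
print(lock.try_unlock(tag))  # legitimate session: True

lock.challenge()             # next session issues a new nonce
print(lock.try_unlock(tag))  # sniffed tag replayed: False
```

A plaintext protocol, by contrast, sends the same unlock bytes every time, so any passive observer on the link obtains a permanent key to the door.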
AI principles
Privacy & data governance, Robustness & digital security

Industries
Consumer products, Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

A photo can open the lock! Smart locks conceal hidden dangers (照片也能开锁!智能门锁藏隐患)

2026-04-17
China News
Photos can "pass for the real thing"! Consumer associations warn: avoid buying uncertified "three-no" smart locks (照片也能"以假乱真"!消协提醒:避免购买"三无"智能门锁)

2026-04-16
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the facial recognition technology in smart locks. The malfunction or insufficient anti-spoofing capability of these AI systems has directly led to security vulnerabilities that can cause harm to property and personal safety. The article documents realized security risks arising from the AI systems' failure to prevent photo-based spoofing, constituting a direct AI incident under the monitor's definitions. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Beijing, Tianjin, and Hebei consumer associations flag multiple smart lock security hazards (京津冀三地消协组织提示智能门锁多项安全隐患问题)

2026-04-17
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems because the smart locks use facial recognition, an AI technology, for authentication, alongside data transmission for remote control. The reported flaws in the AI component (facial recognition anti-spoofing failure) and in data handling (unencrypted transmission) have directly led to realized harms, including unauthorized unlocking and privacy risks, which constitute harm to persons and communities. Therefore, this qualifies as an AI Incident because the development and use of AI systems in these smart locks have directly led to security harms. The article does not merely warn of potential risks but documents actual vulnerabilities and successful exploits, confirming realized harm.
Guards against gentlemen, not thieves! Hands-on tests: several smart locks can easily be tricked open with a single photo (防君子不防小人!实测:多款智能门锁一张照片就能轻松骗开)

2026-04-16
驱动之家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of facial recognition technology used in smart locks. The vulnerabilities allow unauthorized unlocking using photos, which directly leads to harm to property and personal security. Additionally, unencrypted data transmission poses privacy risks. These constitute direct harms caused by the AI system's malfunction or design flaws. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.
Beijing-Tianjin-Hebei consumer associations test for you: three smart locks easily "tricked" open by photos (京津冀消协替您测:3款智能门锁,被照片轻易"骗"开了)

2026-04-16
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems because facial recognition technology is an AI system used for unlocking. The article explicitly states that three smart locks with facial recognition can be easily fooled by photos, indicating a malfunction or weakness in the AI system's anti-spoofing capabilities. This has directly led to security harms (unauthorized access), which is harm to property and user privacy. Additionally, the unencrypted transmission of sensitive data increases risk of unauthorized control. Therefore, this qualifies as an AI Incident due to realized harm caused by AI system vulnerabilities in use.