Android Facial Recognition Flaw Allows Unauthorized Access via Photos

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Consumer group Which? found that 64% of Android smartphones it has tested since 2022 can be unlocked with a printed photo, exposing a major security flaw in AI-based facial recognition systems. The vulnerability affects flagship models and puts UK users' privacy and data security at risk.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facial recognition systems are AI systems that infer identity from biometric input to authenticate users. The article documents that facial recognition on 21 phone models can be spoofed with a simple printed photo, directly compromising user security and privacy. Because this enables unauthorized access to personal data and accounts, it is a realized harm, not merely a potential hazard. The lack of adequate warnings exacerbates the harm. This event therefore meets the criteria for an AI Incident, as direct harm is caused by the AI system's failure in security authentication.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security

Industries
Consumer products; Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard

Is YOUR phone safe? Facial recognition on 21 devices can be spoofed

2026-04-16
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
Facial recognition systems are AI systems that infer identity from biometric input to authenticate users. The article documents that facial recognition on 21 phone models can be spoofed with a simple printed photo, directly compromising user security and privacy. Because this enables unauthorized access to personal data and accounts, it is a realized harm, not merely a potential hazard. The lack of adequate warnings exacerbates the harm. This event therefore meets the criteria for an AI Incident, as direct harm is caused by the AI system's failure in security authentication.
Brits warned face unlock can be 'fooled by printed pics' on 133 popular mobiles

2026-04-16
The Sun
Why's our monitor labelling this an incident or hazard?
The facial recognition system is an AI system whose malfunction, or insufficient security, creates a plausible risk of unauthorized access (harm). The article reports lab testing that demonstrates the vulnerability but does not describe actual incidents of harm occurring. The warnings and manufacturer responses indicate awareness of the risk, but no harm has been realized. The event is therefore best classified as an AI Hazard, reflecting plausible future harm from the AI system's weaknesses, rather than as an AI Incident or Complementary Information.
Why using face unlock on a phone could be risking your data - Which?

2026-04-17
Which?
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: facial recognition used in smartphones for unlocking. The article documents that these AI systems, when tested, were easily fooled by photos, allowing unauthorized access to phones and personal data. This constitutes a direct harm to users' privacy and data security, falling under violations of rights and harm to individuals. The article also describes manufacturers' inadequate warnings and the ongoing prevalence of the issue, confirming that the harm is occurring and is not just a potential risk. It therefore meets the criteria for an AI Incident rather than a hazard or complementary information.
Is your smartphone safe? Face ID on these Android devices can be tricked with images

2026-04-17
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-based facial recognition systems (face unlock) on smartphones and their failure to securely authenticate users, which allows unauthorized access via photographs. This is a direct harm to users' privacy and security, falling under harm to persons. The AI system's malfunction, its inability to distinguish a real face from a photo, is the cause of the harm. The involvement of AI is clear, and the harm is realized rather than merely potential, so the event is classified as an AI Incident.
Facial recognition security fooled by photo on majority of Android phones - Tech Digest

2026-04-16
Tech Digest
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—facial recognition software on smartphones—that is malfunctioning by failing to reliably distinguish between a real face and a 2D photo. This malfunction directly leads to security breaches, allowing unauthorized access to personal data, which is a violation of privacy rights. The harm is realized and documented through extensive testing. The involvement of AI in the system's operation and the resulting security failure meet the criteria for an AI Incident rather than a hazard or complementary information.