Meta Ray-Ban Smart Glasses Hacked for Covert Surveillance


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's Ray-Ban smart glasses, which include AI features and a visible LED that signals when recording is in progress, have been modified by an American tinkerer, Bong Kim, to disable that LED. The modification, sold online, enables covert video recording, raising significant privacy violations and potential human rights concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Ray-Ban Meta glasses are AI systems capable of video recording and potentially facial recognition. The event involves modifying these systems to disable the LED recording indicator, allowing covert recording without consent. This directly leads to violations of privacy and human rights, meeting the criteria for harm under the AI Incident definition. The harm is realized, not merely potential: the glasses are already being modified and sold for covert surveillance. Hence, this is an AI Incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Robustness & digital security
Transparency & explainability

Industries
Consumer products
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard


Meta Ray-Ban: the AI glasses have been jailbroken to film in spy mode

2025-10-27
LEBIGDATA.FR

Meta's connected glasses have already been hacked to allow covert filming

2025-10-24
Frandroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system: Meta's smart glasses with an AI-enabled camera and facial recognition capabilities. The hack disables the LED indicator, allowing covert recording, which directly violates privacy rights, a breach of fundamental rights. The harm is realized, as the modified glasses are already being sold and used for secret recording. This fits the definition of an AI Incident because the misuse of the AI system has directly led to harm (violation of privacy and rights).

A $60 modification to Meta's Ray-Ban glasses disables their privacy-protecting recording light: a hobbyist offers to deactivate the LED that lights up when the user is recording

2025-10-24
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, Meta's Ray-Ban smart glasses, which use AI for recording and potentially facial recognition. The modification disables the LED indicator meant to alert bystanders that recording is taking place, removing a critical privacy protection. This enables covert recording, which violates privacy rights and can cause harm to individuals. The harm is realized, not merely potential, as the modification is actively sold and used. Hence, the event meets the criteria for an AI Incident: the use and modification of the AI system have directly led to violations of human rights (privacy) and harm to communities.

Ray-Ban Meta glasses already turned into a spying tool

2025-10-27
Génération-NT
Why's our monitor labelling this an incident or hazard?
The Ray-Ban Meta glasses are AI systems owing to their smart, connected features, including video recording and potential facial recognition. The event involves use and malfunction (via tampering) of the AI system leading directly to harm: covert surveillance that violates privacy and potentially other human rights. The modification disables a safety feature designed to alert bystanders, enabling spying without consent. This is a direct violation of rights and a clear AI Incident under the framework, as the tampering-induced malfunction of the AI system leads to realized harm to individuals' privacy and rights.