Ambient Light Sensors Enable AI-powered Stealthy Spying


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

MIT CSAIL researchers developed a computational imaging algorithm that uses smartphone ambient light sensor data to reconstruct users’ hand gestures and partial images without requiring permissions. This AI-powered technique poses a stealthy surveillance risk, allowing malicious apps to spy on interactions and invade user privacy without consent.[AI generated]
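As a rough illustration of the underlying idea (a hedged sketch, not the researchers' actual pipeline), this kind of reconstruction can be framed as a linear inverse problem: if the screen displays a sequence of known illumination patterns and the ambient light sensor returns one scalar brightness reading per pattern, the occluding scene (e.g. a hand over the screen) can be recovered by least squares. The toy simulation below assumes idealized linear measurements and synthetic data; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # toy 8x8 "scene" occluded by a hand, flattened to a vector
x_true = rng.random(n)  # unknown per-pixel light transport (what we want to recover)

# Each known screen pattern a_i yields one scalar sensor reading: y_i = <a_i, x>
m = 96  # more patterns than unknowns -> overdetermined system
A = rng.random((m, n))  # known display patterns, one per row
y = A @ x_true + 0.001 * rng.standard_normal(m)  # noisy single-value sensor readings

# Least-squares inversion recovers the occlusion map from the scalar readings
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(float(np.max(np.abs(x_hat - x_true))))  # small reconstruction error
```

The point of the sketch is that a permission-free, single-value sensor can leak spatial information once the attacker controls (or knows) the illumination patterns, which is why the technique works without camera access.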

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the computational imaging algorithm) that processes data from smartphone light sensors to reconstruct images and track user gestures, directly leading to a violation of privacy, a human rights concern. The system's use in this context enables malicious spying, constituting harm to individuals' privacy rights. It therefore qualifies as an AI Incident, because the AI system's use has directly led to harm (a privacy violation).[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Robustness & digital security, Accountability, Safety

Industries
Digital security; Consumer products; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Recognition/object detection


Articles about this incident or hazard


Your phone's light sensor can spy on you | Inquirer Technology

2024-01-19
Inquirer

Computational photography can capture hand positions and gestures using only an ambient light sensor

2024-01-18
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves a computational photography technique that uses computer algorithms to reconstruct images from ambient light sensor data, which qualifies as an AI system. However, no actual harm has occurred yet; the article discusses a potential privacy threat that could plausibly lead to harm if the technology improves and is maliciously used. Since the harm is plausible but not realized, this fits the definition of an AI Hazard rather than an AI Incident. The article focuses primarily on the research findings and their implications for privacy risks, not on societal or governance responses.

Your phone's light sensor can spy on you | Cebu Daily News

2024-01-19
CDN Digital
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (computational imaging algorithms and machine learning models) to process light sensor data to reconstruct images and videos of user gestures and surroundings, which can be used maliciously to spy on individuals. This constitutes a violation of privacy rights, a form of harm to individuals. The article reports that this capability has been demonstrated by researchers, implying the harm is realized or at least actively exploitable. Therefore, this qualifies as an AI Incident due to the direct link between AI use and privacy harm. The brain wave decoding AI is described as research without reported harm, so it does not affect the classification.

iPhone, Android Ambient Light Sensors Allow Stealthy Spying

2024-01-19
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event involves the potential misuse of ambient light sensor data by computational imaging algorithms capable of inferring user gestures and partial images. Although no direct harm has been reported, the demonstrated capability and the permission-free nature of these sensors create a credible risk of privacy violations, including unauthorized surveillance and data capture. This fits the definition of an AI Hazard, as the development and use of this technique could plausibly lead to an AI Incident involving violations of privacy rights and harm to communities. The article does not describe an actual incident of harm but highlights a significant potential threat and discusses mitigation strategies, so it is not Complementary Information or Unrelated.