
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Meta's new Quest Pro VR headset uses AI-driven eye and face tracking to collect biometric and emotional data for targeted advertising and avatar realism. This practice raises significant privacy and human rights concerns, as sensitive user data may be processed and shared, potentially violating fundamental rights and data protection laws.[AI generated]
Why is our monitor labelling this an incident or hazard?
The Meta Quest Pro employs AI systems for facial and eye tracking, which collect sensitive personal data. The article raises serious privacy concerns and potential risks of misuse or data breaches, but it does not report any realized harm. The situation therefore fits the definition of an AI Hazard: the AI system's use could plausibly lead to privacy harms in the future, but no direct or indirect harm has yet occurred or been documented in the article.[AI generated]