Meta Quest Pro's AI-Powered Eye and Face Tracking Raises Privacy and Human Rights Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's new Quest Pro VR headset uses AI-driven eye and face tracking to collect biometric and emotional data for targeted advertising and avatar realism. This practice raises significant privacy and human rights concerns, as sensitive user data may be processed and shared, potentially violating fundamental rights and data protection laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Meta Quest Pro employs AI systems for facial and eye tracking, which collect sensitive personal data. The article raises serious privacy concerns and potential risks of misuse or data breaches but does not report any realized harm or incident. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to privacy harms in the future, but no direct or indirect harm has yet occurred or been documented in the article.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Consumer products

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Marketing and advertisement

AI system task
Recognition/object detection
Organisation/recommenders
Content generation


Articles about this incident or hazard

Meta's New VR Headset Will Track Emotions Using Eye Movements For Targeting Ads

2022-10-15
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (eye-tracking and emotion recognition AI) used in the Meta Quest Pro headset. The AI system's use directly leads to harm in the form of privacy violations and potential breaches of fundamental rights related to personal data and emotional privacy. The AI system's development and use for targeted advertising based on emotional tracking is a clear breach of obligations intended to protect fundamental rights. Hence, this qualifies as an AI Incident under the framework's definition of violations of human rights or breach of obligations under applicable law protecting fundamental rights.
New Meta Quest Pro report raises serious privacy concerns -- here's why

2022-10-14
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The Meta Quest Pro employs AI systems for facial and eye tracking, which collect sensitive personal data. The article raises serious privacy concerns and potential risks of misuse or data breaches but does not report any realized harm or incident. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to privacy harms in the future, but no direct or indirect harm has yet occurred or been documented in the article.
Meta may use the Quest Pro's eye-tracking to serve ads: What is the updated privacy policy

2022-10-14
Gadget Now
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (eye-tracking and facial expression recognition) used in the Quest Pro headset to collect data that could be used for targeted advertising. While this raises privacy concerns and potential future harms related to user profiling and intrusive advertising, no direct or indirect harm is reported to have occurred. The situation therefore represents a plausible future risk rather than an actual incident. Because the article primarily reports on the updated privacy policy and the potential uses of AI-collected data, it fits the definition of Complementary Information rather than an AI Incident or Hazard.
Meta's New Headset Will Track Your Eyes for Targeted Ads

2022-10-13
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (eye and face tracking with emotion recognition) used in the Meta Quest Pro headset. The use of this AI system for targeted advertising based on biometric data raises significant privacy and human rights concerns. Although no specific incident of harm is reported, the plausible risk of violations of biometric privacy laws and potential misuse of sensitive data constitutes a credible future harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the event.
Meta may use the Quest Pro's eye-tracking to serve ads because of course it will

2022-10-13
Android Authority
Why's our monitor labelling this an incident or hazard?
The article describes Meta's collection and potential use of eye-tracking and full-body tracking data to personalize user experiences, which likely includes targeted advertising. This involves an AI system (eye-tracking and data analysis AI) used in the development and use phases. While the article does not report any realized harm such as privacy breaches or unauthorized data use, it highlights a credible risk of violation of user privacy and rights if such data is used for invasive ad targeting without proper consent or safeguards. Since no direct harm has yet occurred but plausible future harm related to rights violations is credible, this qualifies as an AI Hazard rather than an AI Incident.
Meta wants to use the Quest Pro's metaverse eye-tracking technology for ads, because it can?

2022-10-17
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (eye-tracking and facial expression recognition integrated into the Quest Pro headset) used to collect sensitive user data for targeted advertising. This use of AI directly implicates potential violations of privacy and human rights, as it enables unprecedented surveillance and behavioral analysis without clear user consent or robust regulation. Although no specific harm incident is reported yet, the plausible future harm of privacy violations and misuse of personal data in advertising contexts is credible and significant, especially given Meta's history and the current regulatory gaps in metaverse technologies. Therefore, this situation qualifies as an AI Hazard due to the credible risk of harm stemming from the AI system's use in advertising and user data collection.