Meta's AI Smart Glasses Lead to Worker Harm and Privacy Violations in Kenya

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta terminated its contract with the Kenyan firm Sama after more than 1,100 workers, who trained AI systems on footage captured by Ray-Ban smart glasses, reported exposure to graphic and private content. The layoffs followed whistleblowing about privacy violations and poor labor conditions, raising concerns over AI training practices and worker well-being.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (Meta's smart glasses and associated AI training processes). The development and use of these AI systems required human review of sensitive personal data, which led to privacy harms and labor rights violations. The firing of workers after they spoke out suggests potential retaliation, further implicating labor rights issues. Regulatory investigations confirm the recognition of these harms. Therefore, this event meets the definition of an AI Incident due to direct and indirect harms caused by the AI system's use and associated labor practices.[AI generated]
AI principles
Privacy & data governance
Human wellbeing

Industries
Consumer products

Affected stakeholders
Workers
General public

Harm types
Psychological
Human or fundamental rights

Severity
AI incident

Business function:
Research and development

AI system task:
Recognition/object detection


Articles about this incident or hazard

Dispute over fate of Kenyan workers who saw Meta AI glasses films

2026-04-30
BBC
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI-related technology (smart glasses) and human content review, along with past legal actions concerning harm to workers (trauma) and labor rights. While these issues relate to AI systems and their use, the article centers on the dispute and legal proceedings rather than a new incident or hazard. It provides complementary information about ongoing societal and governance challenges in AI deployment, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.
Human AI trainers saw Meta AI Glass users having sex, now there is scandal over their firing

2026-04-30
India Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's smart glasses and associated AI training processes). The development and use of these AI systems required human review of sensitive personal data, which led to privacy harms and labor rights violations. The firing of workers after they spoke out suggests potential retaliation, further implicating labor rights issues. Regulatory investigations confirm the recognition of these harms. Therefore, this event meets the definition of an AI Incident due to direct and indirect harms caused by the AI system's use and associated labor practices.
Meta under scrutiny after ending AI training contract amid worker exposure allegations

2026-04-30
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI system use (content review to improve AI performance) and describes direct harm to workers exposed to graphic and private content, which is a violation of rights and privacy. The involvement of data protection authorities and legal actions further supports the classification as an AI Incident. The harm is realized, not just potential, and stems from the AI system's use and its operational practices. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Meta cuts contractors who reported seeing Ray-Ban Meta users have sex

2026-04-30
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta for data annotation and AI content review related to Ray-Ban Meta smart glasses. The contractors viewed sensitive, private footage, some recorded without clear user consent or awareness, leading to privacy violations and legal actions. The harm is realized, as evidenced by the class-action lawsuit and regulatory investigations. The AI system's use in processing this data is directly linked to these harms, fulfilling the criteria for an AI Incident under violations of human rights and privacy.
Meta's creepiest lawsuit in recent years will make you rethink its AI smart glasses

2026-04-30
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event describes direct harm to workers involved in AI training (exposure to traumatic content and job loss), which is linked to the use of an AI system (Meta's smart glasses and AI training). The privacy violations and potential misuse of the glasses also constitute violations of rights. The involvement of regulatory investigations and legal cases confirms the materialization of harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Meta contractor fires 1,100 AI trainers after they reveal Ray-Ban glasses recorded private and intimate footage

2026-05-01
TechSpot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's smart glasses with AI assistants) whose use led to the recording of private, intimate, and sensitive footage without informed consent, violating privacy rights. The involvement of human contractors in labeling this data under poor labor conditions and subsequent retaliation against whistleblowers indicates violations of labor rights. These harms fall under violations of human rights and harm to communities. Since the AI system's use directly caused these harms, this qualifies as an AI Incident rather than a hazard or complementary information.
Meta in row after workers who say they saw smart glasses users having sex lose jobs - MyJoyOnline

2026-04-30
MyJoyOnline.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's AI-powered smart glasses and AI content review processes). The workers' exposure to graphic, non-consensual content and the subsequent job losses after raising concerns indicate direct and indirect harm related to privacy violations and labor rights. The involvement of data protection authorities and ongoing investigations further supports the classification as an AI Incident. The harm is realized, not merely potential, and stems from the AI system's use and associated human review processes.
Meta AI Controversy: Row Erupts As Kenyan Workers Who Say They Saw Smart Glasses Users Having S*x Lose Jobs | 📲 LatestLY

2026-04-30
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's AI-powered smart glasses) and their development/use (training AI systems with user-captured content). The workers' exposure to graphic and private content constitutes harm to their health and well-being (a form of injury or harm to persons). Additionally, there are potential violations of privacy rights (human rights breach). The job losses and regulatory scrutiny further underscore the materialized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly and indirectly led to harm.
Meta's Kenya Exit Sparks AI Labor Crisis, Exposes Hidden Human Cost of Smart Tech

2026-04-30
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The article describes how the development and use of AI systems (through data annotation and content moderation) have directly led to harm in the form of labor rights violations, job insecurity, and psychological harm to workers. The involvement of AI systems is explicit, as the workers were annotating data to train Meta's AI. The layoffs and exposure to disturbing content are consequences of the AI system's development and use. This meets the criteria for an AI Incident because it involves harm to labor rights and worker health directly linked to AI system use.