Meta Sued Over AI-Driven Content Moderation Harm in Kenya

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta and its subcontractor Sama face lawsuits from over 180 former Facebook content moderators in Kenya, who allege severe mental health harm from reviewing violent and hateful content flagged by AI systems. The legal actions highlight inadequate support and poor working conditions linked to AI-assisted moderation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in the form of content moderation technology used by Meta and its subcontractor. The moderators' work involves reviewing content flagged or filtered by AI systems, exposing them to harmful material. The harm is psychological injury to the moderators, which is a direct health harm caused by the AI system's use in content moderation. The legal actions and complaints about poor working conditions and mental health support further confirm the harm. Hence, this is an AI Incident involving direct harm to people due to the use of AI systems.[AI generated]
AI principles
Accountability, Human wellbeing, Safety, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Psychological, Human or fundamental rights

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard

'No job for humans': the harrowing work of content moderators in Kenya

2023-06-15
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of content moderation technology used by Meta and its subcontractor. The moderators' work involves reviewing content flagged or filtered by AI systems, exposing them to harmful material. The harm is psychological injury to the moderators, which is a direct health harm caused by the AI system's use in content moderation. The legal actions and complaints about poor working conditions and mental health support further confirm the harm. Hence, this is an AI Incident involving direct harm to people due to the use of AI systems.

Meta faces Kenya legal offensive by content moderators

2023-06-15
Al-Ahram
Why's our monitor labelling this an incident or hazard?
The event explicitly involves content moderation on Facebook, which is known to use AI systems to assist in identifying and removing harmful content. The lawsuits allege harm to the mental health of moderators due to exposure to toxic content, unfair labor practices, and failure to prevent online hate speech that has led to real-world violence. These harms fall under injury to health, violations of labor rights, and harm to communities, all linked to the use and management of AI-assisted content moderation systems. The court rulings and ongoing litigation confirm the direct or indirect role of AI systems in causing these harms. Hence, this is classified as an AI Incident.

'No job for humans': The harrowing work of content moderators in Kenya

2023-06-15
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event describes human content moderators working alongside AI systems that flag harmful content on Facebook. The moderators' exposure to traumatic content and subsequent mental health harm is directly linked to the AI system's role in content moderation. This constitutes harm to persons (mental health injury) caused indirectly by the AI system's use. Therefore, this qualifies as an AI Incident under the definition of harm to health caused by the use of AI systems.

Kenya: Former META employees share experience ahead of trial

2023-06-15
Africanews
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content moderation, where human moderators are exposed to harmful content as part of the AI system's operation. The moderators suffer mental health harms due to the nature of the content they must review, which is a direct injury to persons. The article details realized harm, legal complaints, and the role of the AI system's use in causing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

'No job for humans': the harrowing work of content moderators in Kenya

2023-06-15
RTL Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content moderation, where human moderators review content flagged or filtered by AI. The moderators' exposure to harmful content and the resulting mental health damage is a direct harm linked to the AI system's use. The article details realized harm (mental health injury) caused by the AI system's operation and the working conditions imposed. This fits the definition of an AI Incident because the AI system's use has directly led to injury or harm to a group of people (the moderators).

Facebook moderators reveal the horrific posts they are forced to view

2023-06-15
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in Facebook's content moderation process, where algorithms filter or prioritize problematic content for human moderators. The moderators suffer significant mental health harm (post-traumatic stress symptoms) due to exposure to violent and disturbing content that the AI system surfaces. This is a direct or indirect harm to health caused by the AI system's use. The event meets the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to persons (moderators). The article does not describe a potential future harm or a governance response but reports actual harm experienced by humans due to the AI system's role in content moderation.

Facebook and the complaint by its moderators in Africa: a job 'that is not for humans'

2023-06-15
RPP noticias
Why's our monitor labelling this an incident or hazard?
The article details real, ongoing harm to human moderators who must view and filter harmful content on Meta's platform, which relies on AI systems for content distribution and moderation. The moderators' psychological injuries are a direct consequence of their work within the AI system's operational environment. The harm is realized and significant, fulfilling the criteria for an AI Incident. The involvement of AI is inferred from the context of content moderation on a major social media platform, which depends on AI systems to flag or route content for human review. The harm is not speculative or potential but actual and ongoing, so this is not a hazard or complementary information. The event is not unrelated, because it concerns the use of AI systems in content moderation and the resulting harm to humans.

Facebook moderators in Kenya say their work "is not for humans"

2023-06-15
Diario La Tribuna
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems as part of Facebook's content moderation process, which includes AI-assisted verification. The harm described is to the mental health of human moderators who are exposed to violent and disturbing content as part of their job, which is directly linked to the use and operation of AI systems in content filtering and moderation. The harm is realized and significant, fulfilling the criteria for an AI Incident under harm to health of persons. Although the moderators are human, the AI system's role in content moderation and the resulting exposure and harm to these workers is pivotal. Hence, this is an AI Incident rather than a hazard or complementary information.

"Los humanos hacen cosas que nunca habría imaginado": El traumático trabajo de los moderadores en redes sociales

2023-06-15
noticias.unitel.bo
Why's our monitor labelling this an incident or hazard?
The article involves AI systems indirectly as part of content moderation processes but focuses on the human moderators' traumatic experiences and legal actions concerning their working conditions. There is no direct or indirect harm caused by AI system malfunction or misuse. The harms are to human moderators' mental health due to exposure to violent content, not due to AI system outputs or failures. The article also discusses Meta's statements about AI's role and support measures, which are governance and societal responses. Hence, it fits the definition of Complementary Information, providing context and updates related to AI ecosystem impacts and responses, rather than describing a new AI Incident or Hazard.

Facebook content moderators in Kenya call the work 'torture.' Their lawsuit may ripple worldwide

2023-06-29
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes how Facebook's AI-enabled content moderation system requires human moderators to review harmful content flagged by AI. The moderators suffer psychological harm due to exposure to traumatic content, and the lawsuit alleges inadequate support and poor working conditions. The AI system's use in content moderation is central to the harm experienced, fulfilling the criteria for an AI Incident involving injury or harm to a group of people. The event is not merely a hazard or complementary information, as the harm is realized and directly linked to the AI system's use.

Kenyan Facebook Content Moderators: Job Is 'Torture'

2023-06-29
Newser
Why's our monitor labelling this an incident or hazard?
The event involves human content moderators reviewing user-generated content on Facebook, which is typically supported by AI systems for content filtering and prioritization. However, the harm described is psychological injury to human moderators due to exposure to disturbing content, not directly caused by an AI system's malfunction or use. The AI system's role is indirect or supportive, and the harm arises from the nature of the content rather than AI system failure or misuse. Therefore, this does not meet the criteria for an AI Incident or AI Hazard. The event is primarily about labor rights and working conditions related to AI-supported content moderation, making it Complementary Information as it provides context on societal and labor impacts related to AI ecosystem operations.

"It was just a torture for us": Content moderators in Kenya sue Facebook

2023-06-29
NBC Chicago
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used for content moderation, as Facebook employs AI to flag content for human review. The moderators' exposure to harmful content and the resulting psychological trauma constitute injury to health. The lawsuit alleges insufficient mental health support and poor working conditions related to the AI-enabled moderation process. Although the harm is to human moderators rather than end users, it is a direct consequence of the AI system's use. Hence, this is an AI Incident involving harm to a group of people due to the development and use of AI systems for content moderation.

Facebook content moderators in Kenya call the work 'torture.' Their lawsuit may ripple worldwide

2023-06-29
The Buffalo News
Why's our monitor labelling this an incident or hazard?
The content moderators are employed to review and remove harmful content on Facebook, a process that involves AI systems to identify and flag such content for human review. The moderators' exposure to traumatic content and the resulting psychological harm is directly linked to the use of AI systems in content moderation workflows. The harm to the moderators' health and labor rights constitutes an AI Incident because the AI system's use in content moderation has directly led to injury or harm to a group of people. The lawsuit and described conditions demonstrate realized harm rather than potential harm, thus qualifying as an AI Incident rather than a hazard or complementary information.