Exploitation and Poor Working Conditions for AI Data Annotators in Kenya and Colombia

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Thousands of data annotators in Kenya and Colombia, essential for training generative AI systems, face exploitative conditions, including low pay, psychological distress, and lack of legal protections. Their work involves labeling graphic content for AI development, with little recognition or support, highlighting systemic labor rights violations linked to AI advancement.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (generative AI) that require human data labeling. The harms described are direct and ongoing: psychological distress, physical ailments, and labor rights violations suffered by the annotators. These harms stem from the AI system's development and use, as the annotators' work is essential to train and validate the AI. The lack of legal protections and poor working conditions constitute violations of labor rights and harm to health, fitting the AI Incident definition. Although the article also discusses the absence of regulation, the realized harms to workers make this an incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights

Industries
IT infrastructure and hosting

Affected stakeholders
Workers

Harm types
Economic/Property, Psychological, Human or fundamental rights

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard

Poor working conditions for the unknown soldiers of artificial intelligence

2025-10-16
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI) that require human data labeling. The harms described are direct and ongoing: psychological distress, physical ailments, and labor rights violations suffered by the annotators. These harms stem from the AI system's development and use, as the annotators' work is essential to train and validate the AI. The lack of legal protections and poor working conditions constitute violations of labor rights and harm to health, fitting the AI Incident definition. Although the article also discusses the absence of regulation, the realized harms to workers make this an incident rather than a hazard or complementary information.

Poor working conditions for the unknown soldiers of artificial intelligence

2025-10-16
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the sense that human workers label data to train generative AI models, which fits the definition of AI systems. However, the article focuses on the labor conditions of these human workers rather than any harm caused by the AI systems' development, use, or malfunction. There is no indication that the AI systems have caused injury, rights violations, or other harms as defined. The concerns are about labor rights and working conditions, which are important but do not constitute an AI Incident or AI Hazard under the given framework. The article is best classified as Complementary Information because it provides context and societal response information about the AI ecosystem, specifically the human labor behind AI training data, without describing a new AI Incident or Hazard.

Poor working conditions for the unknown soldiers of generative AI

2025-10-16
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically generative AI and machine learning models that require human-labeled data. The harms described relate to labor rights violations, poor working conditions, and psychological harm to workers who support AI development. However, these harms are indirect and systemic rather than a discrete incident caused by AI malfunction or misuse. There is no report of a specific event where AI use directly caused injury or legal violations beyond labor exploitation. The article focuses on raising awareness, describing the workforce behind AI, and ongoing efforts to improve labor conditions, which fits the definition of Complementary Information as it informs about societal and governance responses and the broader AI ecosystem. It does not describe a new AI Incident or AI Hazard.

The unknown soldiers of generative AI: workers in the shadows demand humane conditions

2025-10-16
Al-Eqtisadiah
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems, specifically generative AI, which relies heavily on data annotation by human workers. The workers suffer from labor rights violations and health harms due to poor working conditions, which are directly linked to the AI system development process. This constitutes a violation of labor rights and harm to the health of groups of people, fitting the definition of an AI Incident. The article documents realized harm rather than potential harm, so it is not a hazard or complementary information.

The unknown soldiers of artificial intelligence: ghosts in the shadows suffering "modern slavery"

2025-10-16
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly links the development and use of generative AI systems to the exploitation and harm of human workers who train these AI systems. The harms include physical and mental health problems and labor rights violations, which fall under the definition of AI Incident (harm to health and violation of labor rights). The AI systems are central to the harm because the workers' exploitation is directly tied to the AI training process. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Generative artificial intelligence: technological glory built on the sweat of the poor

2025-10-16
annahar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) and their development process, which depends on human data annotation labor. The harm described is to the health and labor rights of the workers involved in this process, including psychological distress and exploitation. Since the harm is directly linked to the use of AI systems and their development, this qualifies as an AI Incident under the definition of harm to persons and violations of labor rights caused by the AI system's development and use. The article does not describe a potential future harm or a governance response but reports ongoing harm to workers, making it an AI Incident.

Poor working conditions for the unknown soldiers of generative AI

2025-10-17
Al-Quds Al-Arabi
Why's our monitor labelling this an incident or hazard?
The event involves the use of human labor to annotate data for training AI systems, which is essential for the development and functioning of generative AI. The harms described include labor rights violations, poor working conditions, psychological harm, and exploitation of workers, which constitute violations of labor rights and human rights. Since these harms have already occurred and are directly linked to the development and use of AI systems, this qualifies as an AI Incident under the framework, specifically under criterion (c), violations of human rights and labor rights.

The invisible, precarious workforce behind generative AI

2025-10-16
Yahoo!
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI and machine learning models) that require human data annotation. The harm is psychological injury to the annotators caused by their work labeling graphic images for AI training. This is a direct harm linked to the use of AI systems. The article documents realized harm (psychological distress, poor labor conditions) rather than potential harm. Hence, it meets the criteria for an AI Incident under harm to health of a group of people resulting from AI system use.

Gruelling, low-paid human work behind generative AI curtain

2025-10-16
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI models) that require human-labeled data for training and feedback. The article documents realized harms to workers' health and labor rights due to the development and use of these AI systems, including exposure to traumatic content, poor pay, and lack of legal protections. These harms fall under violations of human rights and labor rights, which qualifies this as an AI Incident. The involvement of AI is explicit and central, and the harms are direct and ongoing.

Behind generative AI curtain is gruelling, low-paid human work

2025-10-16
Dawn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the human labor behind AI training, highlighting mental health harms, poor working conditions, and labor rights violations caused by the development and use of AI systems. The harms include psychological injury and labor rights violations, which fall under the AI Incident definition. The AI systems involved are generative AI models requiring human-annotated data. The harms are direct consequences of the AI system's development and use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Gruelling, low-paid human work behind generative AI curtain

2025-10-16
Mint
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the involvement of AI systems (generative AI) that require human-labeled data for training. The human workers suffer from poor working conditions, low wages, and psychological harm due to exposure to traumatic content, which constitutes injury or harm to groups of people (labor rights violations and mental health harm). The harms are realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm to workers involved in the AI supply chain.

The invisible, precarious workforce behind generative AI

2025-10-16
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems as it discusses the human labor behind training AI models. The harms described include psychological harm (anxiety, depression, trauma from exposure to harmful content) and labor rights violations (low pay, lack of contracts, non-payment, poor working conditions). These harms are directly linked to the development and use of AI systems, fulfilling the criteria for an AI Incident. The article also mentions ongoing legal complaints and demands for better labor protections, reinforcing the presence of realized harm rather than just potential risk.

Generative artificial intelligence and its invisible cost, paid with precarious labor

2025-10-16
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI, machine learning models) that require human data annotation. The harms described include psychological injury, labor rights violations, and poor working conditions directly linked to the AI systems' development and use. The presence of lawsuits and complaints confirms that these harms have materialized. Thus, the event meets the criteria for an AI Incident: harm caused by the AI systems' development and use, specifically labor rights violations and health harms to data annotators.

Gruelling, low-paid human work behind generative AI curtain

2025-10-18
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI) and the human labor required to train them. The harms described include mental health injury, poor working conditions, and labor rights violations, which fall under the definition of AI Incident (harm to health and violation of labor rights). The harms are directly caused by the use of AI systems requiring data annotation and content moderation. The presence of legal cases and worker advocacy further supports the classification as an AI Incident rather than a hazard or complementary information.

Gruelling, low-paid human work behind generative AI curtain

2025-10-16
BNN
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) that require human-labeled data for training. The harm described is to the health and well-being of the human workers (psychological harm, anxiety, depression) and violations of labor rights (low pay, precarious contracts, lack of protections). Since the harm is realized and directly linked to the development and use of AI systems, this qualifies as an AI Incident under the definitions provided. The article does not merely discuss potential harm or general AI developments but details actual harm caused by the AI system's development process.

The workers behind artificial intelligence: "It is modern slavery"

2025-10-16
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly because the data annotators' work supports AI development. However, the harms described are labor exploitation and poor working conditions, which are not direct harms caused by the AI systems themselves but by the human labor practices around AI development. There is no direct or indirect harm caused by the AI system's malfunction or use to individuals or communities as defined in the AI Incident criteria. Nor does the article describe a plausible future harm from AI system malfunction or misuse. The article mainly provides contextual and societal information about the AI ecosystem and labor issues, fitting the definition of Complementary Information.

Data annotators: the invisible, precarious workforce behind generative AI

2025-10-16
Semanario Universidad
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the role of human data annotators in the development and use of AI systems, particularly generative AI. It details real harms experienced by these workers, including labor rights violations (e.g., unpaid work, lack of contracts, poor working conditions) and psychological harm from exposure to sensitive content. These constitute violations of labor rights and harm to health, which fall under the definition of AI Incident. Therefore, this event qualifies as an AI Incident due to the direct and ongoing harm caused by the development and use of AI systems relying on precarious human labor.

The invisible, precarious workforce behind generative AI

2025-10-16
El Nuevo Siglo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI, content moderation AI) that require human data annotation labor. The harms described include labor rights violations, exploitation, and mental health harm to workers, which are direct harms linked to the AI system's development and use. The presence of lawsuits and complaints further confirms the realized harm. Thus, the event meets the criteria for an AI Incident under violations of labor rights and harm to people caused by AI system development and use.

AI has become modern slavery, study says

2025-10-16
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The article focuses on the socio-economic and labor conditions of people working to support AI systems, which relates to human rights and labor rights concerns. However, it does not report a specific event where the AI system's development or use has directly or indirectly caused harm to individuals or groups, nor does it describe a plausible future harm scenario. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The content is best classified as Complementary Information because it provides important context about the AI ecosystem and the human costs behind AI development, which informs understanding of AI impacts and governance.

Gruelling, low-paid human work behind generative AI curtain

2025-10-16
Kuwait Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically generative AI models, which require human-labeled data for training and evaluation. The harms described are direct and realized: workers suffer mental health issues due to exposure to disturbing content and poor working conditions, and there are legal complaints alleging labor rights violations such as misclassification of workers and inadequate pay. These harms fall under the definition of AI Incident as they are directly linked to the development and use of AI systems. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

The invisible, precarious workforce behind generative AI

2025-10-19
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI and machine learning models) that require human data annotation. The workers' exposure to traumatic content without sufficient psychological support causes harm to their health (mental health issues such as anxiety and depression). Additionally, the low pay, lack of social protections, and unfair labor practices constitute violations of labor rights. These harms are directly linked to the use and development of AI systems, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or general AI ecosystem developments but reports actual harm experienced by workers, including ongoing legal actions.