OpenAI's Use of Underpaid Kenyan Workers for ChatGPT Data Labeling Causes Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI outsourced the labeling of toxic and disturbing content for ChatGPT's training to Kenyan workers earning less than $2 per hour. Exposed to graphic material, these workers suffered psychological harm and endured poor working conditions, raising concerns about labor exploitation and the human cost of developing AI safety systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the development and use of an AI system (ChatGPT) and its content filtering mechanisms, which rely on human labor for data labeling. The harm described is to the health and well-being of the Kenyan workers who were exposed to traumatic content during this process, constituting injury or harm to a group of people. This harm is indirectly linked to the AI system's development because the AI's training and filtering depend on this labor. Therefore, this qualifies as an AI Incident due to violation of labor rights and harm to health caused indirectly by the AI system's development process.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Safety, Transparency & explainability

Industries
Real estate

Affected stakeholders
Workers

Harm types
Psychological, Economic/Property, Human or fundamental rights, Reputational

Severity
AI incident

Business function
Research and development, Monitoring and quality control

AI system task
Interaction support/chatbots, Content generation

Articles about this incident or hazard

OpenAI used underpaid Kenyan workers to perfect ChatGPT

2023-01-19
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The article focuses on the labor conditions of data labelers who contributed to training ChatGPT, an AI system. Although the workers were underpaid and faced precarious conditions, the event does not report harm caused by the AI system's use or malfunction. The AI system itself is not described as causing injury, rights violations, or other harms. Instead, the article sheds light on ethical and social issues related to AI development practices, which fits the definition of Complementary Information as it provides context and understanding of the AI ecosystem without reporting a new harm or risk.

Microsoft-Backed ChatGPT Outsourced Content Moderation To Kenyan Workers At Below $2/Hour, Investigation Finds By Benzinga

2023-01-19
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The article details the outsourcing of content moderation labor to low-paid workers who label toxic content to train or moderate ChatGPT. While there are concerns about worker mental health and pay, these are labor rights and ethical issues related to human workers, not harms caused by the AI system's outputs or failures. The AI system's involvement is in development, but no harm caused by the AI system itself is reported. Hence, this is Complementary Information providing context on AI development practices and labor conditions, not an AI Incident or Hazard.

'That Was Torture': OpenAI Reportedly Relied on Low-Paid Kenyan Laborers to Sift Through Horrific Content to Make ChatGPT Palatable

2023-01-18
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) and its content filtering mechanisms, which rely on human labor for data labeling. The harm described is to the health and well-being of the Kenyan workers who were exposed to traumatic content during this process, constituting injury or harm to a group of people. This harm is indirectly linked to the AI system's development because the AI's training and filtering depend on this labor. Therefore, this qualifies as an AI Incident due to violation of labor rights and harm to health caused indirectly by the AI system's development process.

Kenyan data labellers were paid R34 an hour to label horrific content for ChatGPT creator OpenAI | Business Insider

2023-01-19
Business Insider
Why's our monitor labelling this an incident or hazard?
The article describes how Kenyan data labelers were employed to label horrific content to train AI systems for content moderation. The AI system's development and use required exposure to harmful content, which caused severe distress and mental health harm to the workers. This constitutes injury or harm to the health of a group of people (the labelers), directly linked to the AI system's development and use. Therefore, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the AI system's development and use.

ChatGPT maker outsourced work to low-paid Kenyans - TIME -- RT World News

2023-01-19
RT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development process, specifically the data annotation by low-paid workers exposed to harmful content. The harm described is to the annotators' mental health and labor conditions, which is a human rights and labor rights issue linked to the AI system's development. Since the harm is realized and directly linked to the AI system's development, this qualifies as an AI Incident under violations of labor rights and harm to health of a group of people. The event is not merely complementary information or unrelated because it reports actual harm caused by the AI system's development process.

To make an AI chat bot behave, Kenyan workers say they were 'mentally scarred' by graphic text

2023-01-20
pcgamer
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, as the labeling work was part of the AI system's development process to improve its safety and reduce harmful outputs. The harm described is to the mental health of the human workers who were exposed to graphic and disturbing content as part of their task. This harm is directly linked to the AI system's development process, as the labeling was necessary to train the AI to avoid inappropriate content. Therefore, this qualifies as an AI Incident because the AI system's development directly led to injury or harm to a group of people (the workers).

ChatGPT's surprisingly human voice came with a human cost

2023-01-18
Mashable
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how the development and use of ChatGPT involved human moderators who suffered psychological harm due to their work filtering and labeling harmful content. This harm is directly linked to the AI system's development process, fulfilling the criteria for an AI Incident under harm to health of persons. Although the harm is to human moderators rather than end users, it is a direct consequence of the AI system's creation and maintenance. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT's surprisingly human voice came with a human cost

2023-01-19
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development process, where human workers were exposed to harmful content to train and moderate the AI. This exposure caused psychological harm, which qualifies as injury or harm to health (a). The harm is indirectly linked to the AI system's development and use. Therefore, this qualifies as an AI Incident due to harm caused by the AI system's development practices. The article does not describe a potential future harm (hazard) or a governance response (complementary information), nor is it unrelated.

ChatGPT's Surprisingly Human Voice Came With a Human Cost

2023-01-19
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The article explicitly describes human harm (psychological trauma and exploitation) resulting from the use of human labor to moderate and label data for training ChatGPT, an AI system. Although the harm is not caused by the AI system malfunctioning or directly, it is an indirect consequence of the AI system's development and use. This fits the definition of an AI Incident because the AI system's development has directly or indirectly led to harm to a group of people (the workers).

OpenAI paid Kenyan workers less than $2 an hour to make ChatGPT less toxic

2023-01-19
Metro
Why's our monitor labelling this an incident or hazard?
The article describes how Kenyan workers were paid very low wages to label graphic and harmful content to train AI systems like ChatGPT to detect and filter toxic content. The workers experienced severe distress and trauma, which constitutes harm to their health. This harm is directly linked to the AI system's development process, fulfilling the criteria for an AI Incident. The involvement of AI is explicit, as the labeled data was used to train AI models to detect harmful content. The harm is realized and not just potential, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's development practices.

Open AI underpaid 200 Kenyans to perfect ChatGPT then sacked them

2023-01-19
Quartz
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's GPT-3) and describes how the development process relied on human content moderators who were underpaid and subjected to harmful working conditions. This constitutes a violation of labor rights, which is a breach of obligations under applicable law protecting fundamental and labor rights. The harm is realized and directly linked to the AI system's development process. Hence, the event meets the criteria for an AI Incident.

ChatGPT pays Kenyan workers $2 an hour to review obscene content | Boing Boing

2023-01-18
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event describes human workers reviewing content related to ChatGPT's safety system, which is an AI system. The workers suffer psychological harm due to exposure to disturbing content, which is a form of injury to health. This harm is directly linked to the AI system's development and use, as the content is generated or filtered by the AI and requires human review. Therefore, this qualifies as an AI Incident due to injury to health caused indirectly by the AI system's use in content moderation.

OpenAI Apparently Paid People in the Developing World $2/Hour to Read About Bestiality

2023-01-20
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems for content moderation, involving human moderators who are paid shockingly low wages and subjected to traumatic content, leading to lasting psychological harm. This constitutes a violation of labor rights and fundamental human rights, directly linked to the AI system's development and use. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident under violations of human and labor rights.

Building ChatGPT's AI content filters devastated workers' mental health, according to new report

2023-01-19
Popular Science
Why's our monitor labelling this an incident or hazard?
The article explicitly links the development of AI content filters for ChatGPT to the mental health harm (PTSD) experienced by human workers who labeled toxic content. The AI system's development process required these workers to review harmful material, causing direct injury to their mental health. This fits the definition of an AI Incident as the AI system's development directly led to harm to a group of people. The harm is realized and significant, and the AI system's role is pivotal in causing it. Therefore, the event is classified as an AI Incident.

Kenyans used to teach ChatGPT to recognize offensive text

2023-01-20
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and details how the development process required human workers to label offensive content to make the AI safer. The workers suffered psychological harm due to exposure to disturbing content, which is a form of injury or harm to health. This harm is directly linked to the AI system's development and use. Therefore, this qualifies as an AI Incident under the definition of injury or harm to a group of people caused indirectly by the AI system's development process.

Kenyans paid $2 per hour to make ChatGPT less toxic - report

2023-01-19
The Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its development process, specifically the human-in-the-loop work to reduce toxicity. The harm described is related to labor rights violations and exploitative working conditions, which fall under violations of labor rights. Since the AI system's development directly involves this exploitative labor practice, this constitutes an AI Incident due to violation of labor rights caused by the AI system's development process.

ChatGPT paid Kenyan workers less than $2 to consume very graphic and toxic content

2023-01-19
Pulse Ghana
Why's our monitor labelling this an incident or hazard?
The event describes direct harm to human health (psychological trauma) caused by the development process of an AI system (ChatGPT). The workers were exposed to graphic and toxic content to label data for training AI filters, which led to traumatic effects. The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident under harm to health. The exploitative pay and traumatic working conditions further support the classification as an incident rather than a hazard or complementary information.

Another Silicon Valley Firm Accused of Exploiting African Workers - Africa.com

2023-01-20
Africa.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of AI systems (OpenAI's GPT-3 language model) and the use of human labor to moderate and improve the AI outputs by removing racism, sexism, and violence. The harm described relates to labor rights violations, including underpayment, inhumane working conditions, and union busting, which are breaches of labor rights and human rights. Since these harms have occurred as a direct consequence of the AI system's development and use, this qualifies as an AI Incident under the framework's definition of violations of human rights or labor rights caused by AI system development or use.

Kenyan data labelers were paid $2 an hour to label child sexual abuse, bestiality, and other horrific content for ChatGPT creator OpenAI, report says

2023-01-18
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (OpenAI's ChatGPT) being trained using labeled data that includes graphic and illegal content. The labeling was done by human workers exposed to traumatic material, causing mental health harm. This harm is directly linked to the AI system's development process. Additionally, the poor pay and working conditions suggest labor rights violations. These factors meet the criteria for an AI Incident, as the AI system's development has directly led to harm to people and breaches of labor rights.

ChatGPT's surprisingly human voice came with a human cost (Mashable!)

2023-01-18
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Kenya-based data labeling team was subjected to psychologically scarring and mentally torturous working conditions due to their role in cleaning ChatGPT's training data from harmful content. This harm is directly linked to the AI system's development process, fulfilling the criteria for an AI Incident under harm to health of a group of people. The AI system's development caused direct harm to these workers, not just a potential or future risk, so it is not a hazard or complementary information.

OpenAI Outsourced Data Labeling to Kenyan Workers Earning Less than $2 Per Hour: TIME Report

2023-01-20
Datanami
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's content filter for ChatGPT) whose development relied on human data labelers exposed to harmful content, leading to mental health harm. The harm is directly linked to the AI system's development process. Although the harm is to workers rather than end users, it fits the definition of an AI Incident as it involves injury or harm to a group of people caused by the AI system's development and use. The event does not describe a potential future harm (hazard) or a governance or research update (complementary information), nor is it unrelated to AI. Therefore, it is classified as an AI Incident.

TIME Reveals The Dark Side Of Training AI Chatbots - DailyAlts

2023-01-20
DailyAlts
Why's our monitor labelling this an incident or hazard?
The article explicitly links the development and training of OpenAI's ChatGPT AI system to the mental health harm suffered by the Kenya-based labeling team. The harm is a direct consequence of the AI system's development process, as the workers were exposed to disturbing content to improve the AI's safety and performance. This meets the criteria for an AI Incident because it involves injury or harm to people caused indirectly by the AI system's development. The event is not merely a potential risk or a complementary update but a realized harm related to AI.

OpenAI and labor exploitation in Kenya to improve ChatGPT

2023-01-18
Bullfrag
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT and its toxicity filtering AI) is explicitly involved, as the labeling of toxic content is necessary for training and improving the AI. The event reveals that subcontracted workers in Kenya are paid extremely low wages and exposed to psychologically harmful content, leading to serious mental health issues. This is a direct harm to the health of these workers and a violation of labor rights, both of which fall under the definitions of AI Incident harms (a) and (c). The harm is directly linked to the AI system's development and use, as these workers' tasks are integral to the AI's safety mechanisms. Therefore, this event qualifies as an AI Incident.

ChatGPT was taught by the world's poorest people

2023-01-18
Metaverse Post
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system (ChatGPT) that relied on human labor from poor regions to scan and label data for training. The workers experienced psychological harm due to exposure to hazardous content, which is a form of injury or harm to health. Since this harm is directly linked to the AI system's development process, it qualifies as an AI Incident under the definition of harm to people caused by AI system development.

Kenyan data labelers were paid $2 an hour to label child sexual abuse, bestiality, and other horrific content for ChatGPT creator OpenAI, report says

2023-01-18
Business Insider
Why's our monitor labelling this an incident or hazard?
The event describes how AI system development involved outsourcing data labeling of horrific content to low-paid workers who suffered severe psychological distress. The AI system's development process directly led to harm to these workers, constituting a violation of labor rights and harm to health. This fits the definition of an AI Incident because the AI system's development caused direct harm to a group of people. Although the AI system itself did not malfunction or cause harm through its outputs, the development process caused significant harm, which is covered under the AI Incident definition. Hence, the classification is AI Incident.

The human cost behind ChatGPT is worse than you think

2023-01-18
Windows Central
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (ChatGPT and its safety tools) that require human content moderation of graphic and illegal material to train AI filters. The psychological harm to the human moderators is a direct consequence of this AI system development process. This harm fits the definition of an AI Incident as it involves injury or harm to groups of people caused by the AI system's development and use. The article does not describe a potential future harm but actual realized harm to workers, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's development process, not on responses or ecosystem context. Therefore, the classification is AI Incident.

Did ChatGPT become more "ethical" through the exploitation of Kenyan workers?

2023-01-19
Journal du Geek
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) and reveals that the process caused injury or harm to the health of a group of people (the Kenyan workers exposed to harmful content). The harm is directly linked to the AI system's development process, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. Therefore, this event qualifies as an AI Incident.

ChatGPT: Kenyan employees paid $2 an hour denounce psychological "torture"

2023-01-19
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The event describes human workers employed to curate and filter content to improve ChatGPT, an AI system. These workers suffered psychological harm due to exposure to disturbing content and were underpaid, violating labor rights and causing harm to health. The AI system's development and use directly led to these harms, meeting the criteria for an AI Incident under the framework.

ChatGPT accused of using Kenyan workers paid $2 an hour to moderate its system

2023-01-18
BFMTV
Why's our monitor labelling this an incident or hazard?
The event describes how OpenAI contracted a company to employ workers to moderate content for AI systems, exposing them to traumatic material and paying them very low wages, resulting in mental health harm. The AI system's development and use directly led to this harm through the content moderation process. Therefore, this qualifies as an AI Incident due to injury or harm to the health of a group of people caused indirectly by the AI system's development and use.

ChatGPT: behind the magic, Kenyan workers paid a pittance to purge it of violent content

2023-01-19
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development process, which relies on human workers to label harmful content. The workers suffer psychological harm due to exposure to violent and toxic material, which is a direct harm to their health. This harm is caused indirectly by the AI system's development and use, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The article does not merely describe potential or future harm, nor is it a general AI news or complementary information; it reports on actual harm caused by the AI system's development practices.

ChatGPT: OpenAI's moderators were exploited and earned no more than $2 an hour

2023-01-19
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The article details poor labor conditions and exploitation of human moderators involved in labeling data for training ChatGPT. While this relates to the AI system's development, the harm is to workers' rights and mental health from employment practices, not from the AI system's malfunction, misuse, or outputs causing harm. There is no direct or indirect harm caused by the AI system itself to others, nor a plausible future harm from the AI system's operation. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides important contextual information about the AI development ecosystem and labor practices, fitting the definition of Complementary Information.

OpenAI turns to Kenyan workers to prevent its models from generating offensive content

2023-01-20
ICTjournal
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (content filtering algorithms for generative AI) and reveals harm to workers' mental health due to the nature of the content they must label. This constitutes harm to a group of people (workers) indirectly caused by the AI system's development process. The psychological harm and labor exploitation issues fall under violations of labor rights and harm to health, which are recognized AI Incident categories. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's development and use.

To make ChatGPT less "toxic", OpenAI reportedly used Kenyan workers paid $2 an hour

2023-01-19
Les Numériques
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) where human moderators were employed to label harmful content to train the AI to be less toxic. The workers suffered mental health harm due to exposure to disturbing content, which is a form of injury or harm to a group of people. The AI system's development process directly led to this harm. Although the harm is to human moderators rather than end users, it fits the definition of an AI Incident because the AI system's development caused injury. The underpayment and poor working conditions further exacerbate the harm. Therefore, this is classified as an AI Incident.

Behind ChatGPT, the trauma of underpaid workers moderating the AI

2023-01-19
Numerama.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development process, specifically the moderation of toxic content to make the AI safer. The harm is realized and direct: the mental health trauma experienced by the human moderators who had to label harmful content. This fits the definition of an AI Incident because the AI system's use (moderation to reduce toxicity) directly led to injury or harm to a group of people. The harm is not speculative or potential but has occurred and is documented through testimonies. Hence, the event is classified as an AI Incident.

ChatGPT: OpenAI accused of using underpaid Kenyan workers

2023-01-21
Linfo.re
Why's our monitor labelling this an incident or hazard?
The event describes how OpenAI employed a third-party company to have Kenyan workers moderate harmful content to improve AI outputs. The workers were underpaid and suffered psychological trauma from exposure to disturbing content. Since the harm (psychological injury and labor rights violations) is directly linked to the AI system's development and use, this qualifies as an AI Incident under the framework, specifically harm to health and violation of labor rights.

ChatGPT: underpaid workers filtering the AI's output

2023-01-19
L'ADN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and the use of human labor to label data for its safety filters. The workers are exposed to harmful content leading to psychological trauma, which is a form of injury or harm to health (a). This harm is directly linked to the AI system's development and use, as the labeling is essential for the AI's content moderation. The underpayment and poor working conditions also represent a violation of labor rights (c). Therefore, this event meets the criteria for an AI Incident due to realized harm caused indirectly by the AI system's development process.

How ChatGPT traumatized the employees responsible for its moderation

2023-01-19
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its moderation process, which required human workers to classify toxic content to make the AI less harmful. The mental health trauma experienced by these workers is a direct harm linked to the AI system's development and use. The harm is to the health of a group of people (moderators), fitting the definition of an AI Incident. Although the harm is indirect (through exposure to AI-generated or AI-related toxic content), it is clearly caused by the AI system's outputs and the moderation process. Therefore, this event is classified as an AI Incident.

"C'était une torture" : OpenAI a payé des travailleurs kényans 2$ de l'heure pour mettre au point ChatGPT

2023-01-19
Business AM
Why's our monitor labelling this an incident or hazard?
The article explicitly links the development of ChatGPT's content filtering AI system to the traumatic work conditions experienced by the Kenyan data labelers. The harm is mental health injury to these workers, which falls under injury or harm to health of persons (a). The AI system's development process required this labeling, making the AI system's development a direct contributing factor to the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information. The event is not unrelated because it clearly involves AI system development and related harm.

Investigation exposes the murkiest side of ChatGPT and the AI chatbot industry | Digital Trends Español

2023-01-19
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article focuses on the development process of an AI system (ChatGPT) and the ethical issues related to the labor conditions of human annotators exposed to disturbing content. There is no indication that the AI system itself caused harm or malfunctioned, nor that it could plausibly lead to harm. The harms are related to labor rights and mental health of workers, but these are consequences of human labor practices, not the AI system's outputs or use. Hence, the event is best classified as Complementary Information, providing important context and ethical considerations about AI development rather than reporting an AI Incident or Hazard.

OpenAI subcontracted workers in Kenya for less than two dollars an hour to supervise ChatGPT

2023-01-21
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (ChatGPT) that relies on human supervisors to filter harmful content. The workers' exploitation (low wages) and exposure to traumatic content represent harm to labor rights and health, directly linked to the AI system's development and use. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm (labor rights violations and health harm) to a group of people (the subcontracted workers).

The precarious working conditions that underpin ChatGPT's success

2023-01-21
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and the subcontracted workers who labeled toxic content to improve the AI's safety. The workers experienced psychological harm (recurring visions, described by one worker as torture) as a result of their working conditions, a direct injury to their health caused by the AI system's development process. This constitutes harm to persons (a) and a violation of labor rights (c). Therefore, this qualifies as an AI Incident because the AI system's development directly led to harm to workers' health and labor rights violations.

OpenAI subcontracted workers in Kenya for less than $2 an hour to supervise ChatGPT

2023-01-19
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The article explicitly links the mental health harm suffered by Kenyan workers to their role in supervising and filtering content for ChatGPT, an AI system. The harm is a direct consequence of the AI system's development and use, as these workers are essential to creating the AI's content filters. This fits the definition of an AI Incident because it involves injury or harm to a group of people caused by the AI system's development and use. The involvement of the AI system is clear, and the harm is realized, not just potential.

The dark side of ChatGPT: exploited workers in Africa paid 1 euro an hour

2023-01-19
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT and its safety moderation AI) and describes the development and use of this AI system through human data labeling. The harm is psychological injury to workers exposed to harmful content as part of training the AI system. This is a direct harm linked to the AI system's development and use. Therefore, it meets the criteria for an AI Incident under harm to health of persons (a).

The hidden side of ChatGPT: OpenAI employed Kenyans for less than 2 euros an hour to review content

2023-01-19
20 minutos
Why's our monitor labelling this an incident or hazard?
The article details the use of human data labelers for content moderation in ChatGPT, which is part of the AI system's development. While there are labor rights concerns and ethical issues regarding worker pay and conditions, the event does not describe a violation of rights caused by the AI system's outputs or use, nor does it describe harm caused by the AI system malfunction or misuse. The labor issues stem from human management and subcontracting practices rather than the AI system's operation or outputs. Therefore, this is best classified as Complementary Information providing context on ethical and labor concerns in AI development rather than an AI Incident or Hazard.

OpenAI subcontracts workers in Kenya to supervise ChatGPT

2023-01-19
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and the human moderation process necessary to make it safer. The harm described is psychological injury to the human workers who had to review disturbing content for long hours at low pay. This constitutes injury or harm to the health of a group of people (the moderators), directly linked to the AI system's use and development. Therefore, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the AI system's development and use.

OpenAI took advantage of labor exploitation in Kenya to improve ChatGPT: workers were paid less than US$2 an hour

2023-01-21
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and its development process relying on human data labelers in Kenya. These workers were paid less than $2 per hour and exposed to disturbing content, which constitutes exploitation and a violation of labor rights. Since the harm (exploitation and poor labor conditions) has already occurred and is directly linked to the AI system's development, this event meets the criteria for an AI Incident under violations of labor rights.

OpenAI subcontracted Kenyan employees at one euro an hour to test the ChatGPT artificial intelligence

2023-01-19
eldiario.es
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and its development process requiring human reviewers to label toxic content. The harm described is to the mental health of these workers, caused by their exposure to disturbing AI-generated content during the training process. This is a direct harm linked to the AI system's development and use. Hence, it meets the criteria for an AI Incident under harm to health (a).

This is how ChatGPT was 'perfected': OpenAI employed Kenyan workers at $2 an hour to label child sexual abuse, bestiality, and other horrific content

2023-01-19
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development process, which required human labeling of harmful content to train the AI to filter such content. The labeling work exposed workers to traumatic material, causing psychological harm, which is a form of injury to health (harm category a). This harm is directly linked to the AI system's development and use. Additionally, the low wages and poor working conditions raise labor rights concerns (harm category c). Therefore, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's development process to human workers.

ChatGPT

2023-01-21
Globedia
Why's our monitor labelling this an incident or hazard?
The article focuses on the labor conditions and ethical implications of the data labeling process used to train and filter ChatGPT's outputs. While the AI system is involved in the development phase, the harm described is to the workers labeling data, not caused by the AI system's outputs or malfunction. There is no indication that the AI system itself caused injury, rights violations, or other harms through its use or malfunction. The event does not describe a plausible future harm from the AI system either. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides complementary information about the AI development ecosystem and labor practices, which is important for understanding broader AI impacts and governance.

OpenAI improved ChatGPT by paying workers in Kenya less than two dollars an hour

2023-01-19
MuyComputerPRO
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development process, which required human labeling of toxic content to train an AI safety mechanism. The harm is to the workers who were paid very low wages and suffered mental health consequences from exposure to harmful content. This constitutes a violation of labor rights and harm to health, both recognized categories of AI Incident harm. The AI system's development directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Behind ChatGPT, the artificial intelligence of the moment, are workers earning $2 an hour

2023-01-20
es-us.finanzas.yahoo.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of AI systems (ChatGPT and related AI training tools) and the use of human data labelers to moderate content for training. The harm is realized and direct: workers suffer mental health issues due to exposure to disturbing content as part of the AI system's development process. This fits the definition of an AI Incident under harm to health of groups of people caused by the development or use of an AI system. The event is not merely a potential hazard or complementary information but a clear case of harm linked to AI system development.

ChatGPT Leaves Employees Mentally Scarred for $2 an Hour

2023-01-25
The Tech Report
Why's our monitor labelling this an incident or hazard?
The event involves the development of an AI system (ChatGPT) that required human labeling of harmful content to train the AI to recognize and filter such content. The workers tasked with this labeling experienced mental harm, which constitutes injury or harm to a group of people. This harm is directly linked to the AI system's development process. Therefore, this qualifies as an AI Incident under the definition of harm to health caused by the development of an AI system.

ChatGPT built with help of underpaid, exploited Kenyan workers, report alleges

2023-01-23
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) and its development process relying on Kenyan workers who were underpaid and exposed to graphic, harmful content, leading to psychological harm. This constitutes harm to the health of a group of people (laborers) and violations of labor rights, both of which fall under the definition of AI Incident. The harm is directly linked to the AI system's development process, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Neocolonial slavery: ChatGPT built by using Kenyan workers as AI guinea pigs, Elon Musk knew

2023-01-26
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the development of ChatGPT involved Kenyan workers who were underpaid and exposed to extremely graphic and disturbing content, which caused serious mental health issues. This is a direct harm to the health of a group of people caused by the AI system's development process. The involvement of the AI system is clear (ChatGPT), and the harm is realized and significant. Hence, this event meets the criteria for an AI Incident under harm category (a) injury or harm to the health of a person or groups of people.

Time for OpenAI to Open Source Toxicity Detection?

2023-01-25
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The article primarily provides information about OpenAI's toxicity detection AI system, its training process, and community discussions about open sourcing and API access. It does not describe any incident where the AI system caused harm or malfunctioned, nor does it present a credible risk of future harm stemming from the system. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it offers contextual and governance-related information about AI development and deployment, fitting the definition of Complementary Information.

Do You Know ChatGPT Was Taught By World's Poorest People? Report

2023-01-25
Coingape
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) and highlights labor rights violations and poor working conditions for data labelers who contributed to the AI's training. This constitutes a violation of labor rights, which falls under harm category (c) in the AI Incident definition. The harm has already occurred as workers were paid low wages and experienced distress. Therefore, this qualifies as an AI Incident due to violations of human and labor rights linked to the AI system's development process.

Kenyan workers making less than $2/hour helped make ChatGPT safe for public use

2023-01-24
MobileSyrup
Why's our monitor labelling this an incident or hazard?
The article explicitly links the development of ChatGPT's safety mechanisms to the use of outsourced human labor exposed to toxic and disturbing content, resulting in mental health harm. The AI system's development process directly led to harm to these workers, fulfilling the criteria for an AI Incident. The harm is not hypothetical or potential but realized, and the AI system's development is the cause. Although the harm falls on the human labelers rather than on end users of the system, it is directly connected to the AI system's development and use, meeting the definition of an AI Incident.

Do You Know ChatGPT Was Taught By World's Poorest People? Report

2023-01-25
TradingView
Why's our monitor labelling this an incident or hazard?
The article describes how the development of ChatGPT involved human data labelers from poor regions working under exploitative conditions, including low pay and mental distress from moderating graphic content. This constitutes a violation of labor rights and dignity, which is a recognized harm under the AI Incident definition. The AI system's development directly involved these workers, and the harm to their rights and well-being is a direct consequence of the AI system's creation process. Therefore, this qualifies as an AI Incident due to violations of labor rights and harm to workers involved in AI development.

ChatGPT and the future of African AI - African Business

2023-01-27
African Business
Why's our monitor labelling this an incident or hazard?
The article does not report a direct or indirect AI Incident causing realized harm, nor does it describe a specific AI Hazard event with plausible imminent harm. Instead, it offers a broad critique and contextual background on AI's development, use, and socio-economic effects in Africa, including labor exploitation and cultural bias risks. These aspects align with Complementary Information as they provide important context, societal and governance considerations, and highlight ongoing challenges without focusing on a discrete incident or hazard event.

What the world's top AI expert, now at Meta, says about the ChatGPT frenzy: not a breakthrough or anything new, it merely builds on foundations developed by others

2023-01-25
Kenh14.vn
Why's our monitor labelling this an incident or hazard?
The article focuses on expert opinions and contextual background about ChatGPT and AI development, without describing any event where AI caused harm or could plausibly cause harm. There is no mention of injury, rights violations, disruption, or other harms linked to AI use or malfunction. The content is primarily informative and analytical, fitting the definition of Complementary Information as it enhances understanding of AI systems and their ecosystem without reporting a new incident or hazard.

The surprise behind ChatGPT's success: its parent company hired cheap African 'IT workers' exposed daily to toxic language and images

2023-01-22
cafef.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development process, specifically the labeling of harmful content to improve AI safety. The workers labeling this data suffer mental health harm due to exposure to toxic content, which is a form of injury or harm to health (a). This harm is directly linked to the AI system's development and use, as the labeling is necessary for the AI's content moderation capabilities. Hence, the event meets the criteria for an AI Incident because the AI system's development has indirectly led to harm to persons.

Ambitions to beat Google, Meta, Amazon: Microsoft pours $10 billion into a new 'invisible' tool... is this why it laid off 10,000 employees?

2023-01-24
cafef.vn
Why's our monitor labelling this an incident or hazard?
The article focuses on investment, strategic integration, and market competition involving AI systems like ChatGPT, which qualifies as an AI system. However, it does not describe any event where AI systems have caused or could plausibly cause harm. The mention of layoffs is linked to economic downturns, not AI system malfunction or misuse. Hence, the content fits the definition of Complementary Information, providing background and context on AI ecosystem developments and corporate responses rather than reporting an AI Incident or Hazard.