
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Kenyan workers hired to label explicit and disturbing content used to train OpenAI's ChatGPT suffered severe psychological harm, including trauma, anxiety, and depression. Their work, essential to making the AI safer, exposed them to graphic material, highlighting the human cost of developing AI content moderation systems.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (OpenAI's ChatGPT) in which human annotators were exposed to harmful explicit content in order to train the AI's content moderation capabilities. This exposure caused documented psychological harm to the workers (insomnia, anxiety, depression, panic attacks), which qualifies as injury or harm to the health of persons (harm category a). Because the AI system's development and use directly led to this harm, the event meets the definition of an AI Incident.[AI generated]