
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
TikTok content moderators, including Candie Frazier, sued TikTok and its parent company, ByteDance, alleging mental health harm from prolonged exposure to violent and disturbing videos. The lawsuit claims TikTok failed to implement adequate AI-based safeguards to reduce that exposure, leading to PTSD and other psychological injuries among moderators.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves an AI system: TikTok employs AI technologies for video content moderation, and the lawsuit specifically cites the absence of AI-based technical measures to reduce moderators' exposure to harmful content. The harm is a mental health injury (post-traumatic stress disorder) suffered by the moderator as a result of the nature and volume of content she had to review, which is directly linked to the AI-assisted content moderation process. Because this is a direct harm to a person arising from the use of the AI system and its operational environment, the event qualifies as an AI Incident under the definitions provided.[AI generated]