TikTok Content Moderators Sue Over Mental Health Harm from Inadequate AI Safeguards


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

TikTok content moderators, including Candie Frazier, sued TikTok and its parent company, alleging mental health harm from prolonged exposure to violent and disturbing videos. The lawsuit claims TikTok failed to implement adequate AI-based safeguards to reduce exposure, leading to PTSD and other psychological injuries among moderators.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves an AI system because TikTok employs AI technologies for video content moderation, and the lawsuit specifically mentions the lack of AI-based technical measures to reduce exposure to harmful content. The harm is mental health injury (post-traumatic stress disorder) suffered by the reviewer due to the nature and volume of content she had to review, which is directly linked to the AI-assisted content moderation process. The injury is a direct harm to a person caused by the use of the AI system and its operational environment. Hence, this is an AI Incident as per the definitions provided.[AI generated]
AI principles
Safety; Robustness & digital security; Human wellbeing; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Psychological

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


Video moderator suffers depression after viewing large volumes of violating videos, sues TikTok for compensation

2021-12-28
驱动之家 (MyDrivers)

Video moderator suffers depression after viewing large volumes of violating videos, sues TikTok for compensation

2021-12-28
凤凰网 (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event describes clear harm to a person's health (mental health injury) caused indirectly by the use, or inadequate use, of AI systems for content moderation on TikTok. The lawsuit claims TikTok failed to implement AI-based technical safeguards to reduce moderators' workload and exposure to harmful content, a failure in how the AI system was deployed. This meets the definition of an AI Incident, since the AI system's development or use directly or indirectly led to harm to a person. The presence of AI is reasonably inferred from the content-moderation context and the mention of technical safeguards for handling videos intelligently.

Daily work viewing pornographic and violent videos harms moderators' physical and mental health; TikTok moderators seek collective compensation

2021-12-25
頭條日報 Headline Daily
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for content moderation on TikTok, where AI flags videos for human review. The moderators' exposure to harmful content is a direct consequence of the AI system's use in the content review process. The moderators suffer mental health harm (PTSD, nightmares) due to this exposure, which is a clear injury to health. Therefore, this is an AI Incident as the AI system's use has directly led to harm to people.

"تيك توك" يواجه اتهامات بتسببه بصدمات عقلية لمستخدميه

2021-12-25
صحيفة سبق الالكترونية
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content moderation and recommendation, which is reasonably inferred given the description of reviewing massive volumes of videos rapidly and the platform's known use of AI. The lawsuit alleges direct psychological harm (mental trauma, nightmares, difficulty sleeping) to content moderators and users due to exposure to harmful content that the AI system helps surface or fails to filter adequately. This harm fits the definition of injury or harm to health caused directly or indirectly by the AI system's use. Hence, this event qualifies as an AI Incident.

UAE | Lawsuit against TikTok: "It causes mental trauma to its users and workers"

2021-12-25
الخليج 365 (Al Khaleej 365)
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content recommendation and moderation, which directly influence the exposure of users and moderators to harmful content. The lawsuit alleges that this exposure has caused mental health injuries to both users and content moderators. The harm is realized and directly linked to the AI system's operation in content curation and moderation. Hence, this qualifies as an AI Incident: injury to persons caused indirectly by the AI system's use.