TikTok's AI Moderation Removes Millions of Videos, Faces Challenges with Harmful Content


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

TikTok's AI-driven content moderation system removed over 81 million videos globally between April and June for violating community guidelines, including 27,000 for COVID-19 misinformation. Despite high accuracy, the system failed to prevent minors' exposure to pornographic content in Spain, highlighting ongoing challenges in protecting vulnerable users.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes TikTok's use of automated detection technology (likely AI systems) to identify and remove videos spreading false COVID-19 information. The removal of misinformation directly relates to harm prevention in public health and community well-being. Since the AI system is central to the detection and removal process, and the misinformation being removed constitutes a harm to communities and public health, this qualifies as an AI Incident. The event reports realized harm (misinformation spreading) and the AI system's role in mitigating it, not merely potential harm or general AI news.[AI generated]
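For illustration only, the labelling rule this rationale relies on (an AI system is materially involved and a harm has been realized, rather than merely possible) could be sketched roughly as follows. The Python below is a hypothetical sketch, not the monitor's actual schema or implementation; all field and function names are invented for this example.

from dataclasses import dataclass

# Hypothetical sketch of the labelling rule described above; not the
# monitor's real schema or code.
@dataclass
class Event:
    ai_system_involved: bool  # e.g. automated detection/removal of videos
    harm_realized: bool       # e.g. misinformation circulated, minors exposed
    harm_plausible: bool      # harm is foreseeable but not yet observed

def label(event: Event) -> str:
    if event.ai_system_involved and event.harm_realized:
        return "AI incident"
    if event.ai_system_involved and event.harm_plausible:
        return "AI hazard"
    return "out of scope"

# The TikTok event as described: an AI system is central and harm was realized.
print(label(Event(ai_system_involved=True, harm_realized=True, harm_plausible=True)))
# -> "AI incident"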
AI principles
Safety; Robustness & digital security; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard


TikTok deletes more than 20,000 videos for spreading COVID 'fake news'

2021-10-14
Milenio.com

Blue YouTube and orange YouTube: how minors reach pornographic content through TikTok

2021-10-15
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions TikTok's use of algorithms (AI systems) to moderate and recommend content, which fail to adequately filter out inappropriate and harmful content for minors. This failure has led to minors accessing pornographic content and suffering sexual harassment, constituting harm to health and a violation of their rights. The AI system's malfunction, or the inadequate design of its content moderation, is a contributing factor to these harms. This therefore qualifies as an AI Incident due to realized harm caused by the AI system's use.

TikTok removed 81 million videos between April and June for breaking community rules

2021-10-13
TekCrispy
Why's our monitor labelling this an incident or hazard?
The automated content moderation system is an AI system: it performs content analysis and decision-making to remove videos that violate community rules. Its use directly leads to the removal of harmful content, preventing harm to users and communities, which falls under the harm-to-communities element of the AI Incident definition. Although some false positives occur, the system plays a pivotal role in managing harmful content. The event therefore qualifies as an AI Incident: the harmful content at issue constitutes a significant harm to communities, and the AI system's use is central to detecting and managing it.