
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
TikTok's AI-driven content moderation system removed more than 81 million videos globally between April and June for violating community guidelines, including 27,000 removed for COVID-19 misinformation. Despite the system's reported accuracy, it failed to prevent minors' exposure to pornographic content in Spain, highlighting ongoing challenges in protecting vulnerable users.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event describes TikTok's use of automated detection technology (likely AI systems) to identify and remove videos spreading false COVID-19 information. Removing misinformation relates directly to preventing harm to public health and community well-being. Because the AI system is central to the detection and removal process, and the misinformation it removed constitutes a realized harm to communities and public health, the event qualifies as an AI Incident: it reports harm that actually occurred (the spread of misinformation) and the AI system's role in mitigating it, rather than merely potential harm or general AI news.[AI generated]