
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
TikTok's AI-driven recommendation system has promoted and amplified drug-related content to minors, exposing them to drug use and informal drug markets. Despite moderation efforts, users evade detection with coded language, increasing the risk of addiction and drug-related harm among young users. The incident highlights the AI system's failure to prevent such harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
The TikTok platform uses AI algorithms to recommend content to users, including minors. The article explains that these algorithms promote drug-related content under hashtags like #Pingtok, exposing young users to drug use and informal drug markets. This exposure has increased the risk of drug consumption among minors, which constitutes harm to health and to communities. The platform's content moderation struggles to keep pace because users adopt coded language, which further exacerbates the problem. Since the AI system's use and malfunction (inadequate content moderation) have indirectly led to harm (increased drug-use risk and related health harms among minors), this qualifies as an AI Incident under the OECD framework.[AI generated]
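The evasion mechanism described above can be illustrated with a minimal sketch. This is a hypothetical keyword blocklist, not TikTok's actual moderation system; the blocklisted terms and the function name are illustrative assumptions. It shows why simple term matching flags explicit drug hashtags but misses coded slang such as #Pingtok.

```python
# Hypothetical blocklist of explicit drug-related terms (illustrative only;
# this is not TikTok's moderation system).
BLOCKLIST = {"mdma", "ecstasy", "pills"}


def is_flagged(hashtag: str) -> bool:
    """Flag a hashtag only if it contains a known blocklisted term."""
    tag = hashtag.lower().lstrip("#")
    return any(term in tag for term in BLOCKLIST)


print(is_flagged("#ecstasypills"))  # explicit term: caught
print(is_flagged("#Pingtok"))       # coded slang: evades the filter
```

Because coded terms carry no lexical overlap with the blocklist, moderation that relies on term matching fails until human reviewers or retrained classifiers learn the new slang, by which time the recommendation system may already have amplified the content.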