
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
TikTok's AI-driven recommendation and data processing systems enabled ByteDance employees to spy on journalists, violating privacy and press freedoms. Additionally, a U.S. government experiment found TikTok's algorithm exposed users to Nazi content within 75 minutes, highlighting the platform's role in spreading harmful extremist material.[AI generated]
Why is our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that infers user preferences and serves content accordingly. In the experiment, the system surfaced Nazi and extremist content within 75 minutes without any user input, demonstrating a direct link between the system's outputs and users' exposure to harmful material. Such exposure can harm communities by spreading extremist ideologies and misinformation, which constitutes harm to communities under the AI Incident definition. The article also details TikTok's moderation challenges and delayed responses to extremist content, reinforcing the AI system's role in the harm. This event therefore qualifies as an AI Incident: the system's content recommendations caused realized harm.[AI generated]