
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
France's Education Minister Édouard Geffray filed a legal complaint against TikTok, alleging that its AI-driven recommendation algorithm rapidly exposes minors to depressive, self-harm, and suicide-inciting videos. An experiment conducted by the minister demonstrated the algorithm's harmful effects, prompting accusations of provocation to suicide and illicit data processing.[AI generated]
Why is our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that determines which videos users see. The minister's experiment and the ongoing investigation indicate that the system's operation has led to harm by trapping young users in spirals of harmful content, including content that incites suicide. This meets the criteria for an AI Incident: the AI system's use has directly or indirectly caused harm to health and a violation of rights. The event is not merely a potential risk or a governance response; it documents realized harm and legal action.[AI generated]