TikTok's AI Algorithm Promotes Addictive and Potentially Harmful Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Investigations by The Wall Street Journal revealed that TikTok's AI-driven recommendation algorithm tracks user watch time to curate content feeds, quickly identifying user vulnerabilities. This mechanism has led to the promotion of addictive and extreme content, raising concerns about negative impacts on users' mental health and societal well-being.[AI generated]
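The feedback loop described above can be sketched in a few lines. The topic labels, scoring rule, and function names below are illustrative assumptions for exposition, not TikTok's actual implementation: the core idea is simply that watch time per topic accumulates into an interest profile, and the feed is re-ranked by that profile, so lingering on one kind of content quickly narrows what is shown next.

```python
# Hypothetical, simplified sketch of watch-time-based feed ranking.
# All names and the scoring rule are illustrative assumptions,
# not TikTok's actual system.
from collections import defaultdict

def update_interest(profile, topic, watch_seconds, video_length):
    """Accumulate interest as the fraction of each video actually watched."""
    completion = min(watch_seconds / video_length, 1.0)
    profile[topic] += completion
    return profile

def rank_feed(profile, candidates):
    """Order candidate videos by the viewer's accumulated topic interest."""
    return sorted(candidates, key=lambda v: profile[v["topic"]], reverse=True)

profile = defaultdict(float)
# A viewer who lingers on one topic quickly dominates their own feed:
update_interest(profile, "dieting", watch_seconds=58, video_length=60)
update_interest(profile, "comedy", watch_seconds=5, video_length=60)

candidates = [{"id": 1, "topic": "comedy"}, {"id": 2, "topic": "dieting"}]
feed = rank_feed(profile, candidates)
# feed now leads with "dieting" content, illustrating the narrowing effect
```

Even this toy version shows why the mechanism is hard to escape: the ranking feeds back into future watch time, reinforcing whatever the profile already favors.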

Why's our monitor labelling this an incident or hazard?

The TikTok algorithm is an AI system that personalizes content feeds based on user data and behavior. The article provides clear evidence that the AI system's use has directly led to harm: a user developed an eating disorder after being exposed to harmful content promoted by the algorithm. Researchers confirm the algorithm's role in pushing harmful content, including self-harm promotion. The algorithm also exhibits biases and suppresses content from minority creators, causing violations of rights and harm to communities. These harms fall under injury to health, violations of rights, and harm to communities, all covered by the AI Incident definition. The harms are realized, not just potential, and the AI system's role is pivotal. Hence, this is an AI Incident.[AI generated]
AI principles
Human wellbeing, Safety, Transparency & explainability, Privacy & data governance, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological, Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard

Lauren got TikTok for a laugh. It took her down a dangerous path

2021-07-25
Australian Broadcasting Corporation
Comment l'algorithme TikTok pousse à l'extrême la bulle de filtres (How the TikTok algorithm pushes the filter bubble to the extreme)

2021-07-26
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as TikTok's recommendation algorithm, which combines diverse data inputs and user behavior to generate personalized content feeds. The investigation shows that the algorithm pushes users toward increasingly extreme or narrowly focused content, including videos about depression and other mental health issues, which can damage users' psychological well-being. This constitutes indirect harm to health and communities caused by the AI system's use. It therefore qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm.