TikTok AI Recommender Exposes Irish Teens to Self-Harm and Suicide Content


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An RTÉ Prime Time investigation found that TikTok's AI-driven recommendation system served videos about self-harm and suicide to accounts registered as 13-year-olds within minutes of sign-up. Since February, more than 140 children have contacted support services about self-harm, deepening concerns about the platform's impact on youth mental health.[AI generated]

Why's our monitor labelling this an incident or hazard?

The TikTok platform uses AI-based algorithms to recommend content to users. The investigation shows that these algorithms steer vulnerable children toward harmful content, resulting in real mental health harms such as self-harm and suicidal ideation. This constitutes indirect harm caused by the AI system's use, fulfilling the criteria for an AI Incident involving harm to health and communities. Coverage of the investigation calls for regulatory and legislative action to address the algorithmic amplification of dangerous content, highlighting the AI system's pivotal role in the harm.[AI generated]
AI principles
Safety; Human wellbeing; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological; Human or fundamental rights; Public interest; Reputational

Severity
AI incident

Business function
Marketing and advertisement; Monitoring and quality control

AI system task
Organisation/recommenders


Articles about this incident or hazard


TikTok and children

2024-04-17
The Irish Times
Why's our monitor labelling this an incident or hazard?
The TikTok platform uses AI-based algorithms to recommend content to users. The investigation shows that these algorithms are leading vulnerable children into harmful content, which has resulted in real mental health harms such as self-harm and suicidal ideation. This constitutes indirect harm caused by the AI system's use, fulfilling the criteria for an AI Incident involving harm to health and communities. The letter calls for regulatory and legislative action to address the algorithmic amplification of dangerous content, highlighting the AI system's pivotal role in the harm.

RTÉ investigation into TikTok shows Irish teens being shown videos depicting self-harm and suicide

2024-04-16
Dundalk Democrat
Why's our monitor labelling this an incident or hazard?
TikTok's recommender system is an AI system that infers from user data to generate content recommendations. The investigation shows that this AI system's outputs have directly led to harm by exposing young teens to harmful content that promotes self-harm and suicidal thoughts, which is a clear injury to health and harm to a vulnerable group. The harm is realized and documented through expert testimony and statistical context. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and harm to health.

13 on TikTok: Self-harm and suicide content shown shocks experts

2024-04-16
RTE.ie
Why's our monitor labelling this an incident or hazard?
The TikTok recommendation algorithm is an AI system that analyzes user behavior to suggest content. The experiment showed that for accounts identified as 13-year-olds, the AI system rapidly escalated exposure to harmful content related to self-harm and suicide. Experts confirmed the emotional and psychological harm caused by this content, indicating realized harm to health (mental health) of young users. The AI system's development and use directly led to this harm by promoting and amplifying such content. This meets the criteria for an AI Incident as the AI system's outputs have directly caused harm to a vulnerable group (adolescents).

Self-harm and suicide TikTok content recommended to 13-year-olds within minutes of signing up, investigation finds

2024-04-16
Irish Independent
Why's our monitor labelling this an incident or hazard?
The TikTok recommendation algorithm is an AI system that infers user preferences and recommends content accordingly. The investigation shows that this AI system recommended harmful content related to self-harm and suicide to accounts set as 13 years old, which constitutes direct or indirect harm to the health of minors and harm to communities (harm categories (a) and (d)). The harm is realized, as the content was actually recommended and viewed. TikTok's partial removal of some videos and regulatory scrutiny are complementary information but do not negate the incident. Therefore, this qualifies as an AI Incident due to the AI system's role in causing harm through its content recommendations to vulnerable users.

Over 140 children have contacted Childline since February over self-harm | BreakingNews.ie

2024-04-17
BreakingNews.ie
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithms, which are AI systems, are amplifying harmful content related to self-harm and suicide, leading to real harm as children have reached out for help. The harm is to children's health (mental health harm and suicidal ideation), fitting the definition of an AI Incident. The event involves the use of AI systems (algorithmic amplification) causing indirect harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

RTE Prime Time experiment finds 'heinous' content suggested to teens on TikTok

2024-04-16
Irish Mirror
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences the content shown to users. The experiment demonstrated that this AI system, through its use, led to the exposure of harmful content to minors, which is linked to mental health harm and increased self-harm rates among teenagers. The harm is realized and significant, affecting the health of a vulnerable group. Therefore, this qualifies as an AI Incident due to indirect harm caused by the AI system's outputs.

ISPCC says Prime Time report on dangers of TikTok for teenagers shocking but "no surprise"

2024-04-17
Kilkenny People
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what videos users see. The report shows that this AI system's use has directly led to harm to children's mental health, including self-harm and suicidal ideation, fulfilling the criteria for an AI Incident. The harm is realized and linked to the AI system's use, not just a potential risk. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.