X's AI Recommends Explicit Content to UK Teens, Failing Safeguards

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study by the Center for Countering Digital Hate found that X's AI-driven recommendation and search algorithms consistently exposed UK minors as young as 13 to explicit sexual content and enabled unsolicited contact from adults. The platform's AI failed to enforce safeguards, directly harming children's safety and violating legal protections.[AI generated]

Why's our monitor labelling this an incident or hazard?

The platform's recommendation system and content moderation involve AI systems whose outputs determine which content users see. The study shows that these AI systems have directly led to harm by exposing minors to explicit sexual content and to unsolicited messages from adults, including sexually suggestive material and potential grooming. This is a clear violation of protections for minors and constitutes harm to health and safety. The event therefore qualifies as an AI Incident because of the direct role of AI in causing harm to vulnerable users.[AI generated]
AI principles
Safety; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard

Study finds Elon Musk's X exposing young teens to pornographic content: Report

2026-04-15
Hindustan Times
Explicit content consistently recommended to 13-year-olds on X, report finds

2026-04-15
The Independent
Why's our monitor labelling this an incident or hazard?
The report explicitly describes how X's AI-powered recommendation algorithm and search functions expose minors to explicit sexual content and enable direct messaging from adults, increasing the risk of grooming and sexual exploitation. The AI system's outputs have directly harmed children by facilitating access to harmful content and unsafe interactions. This meets the criteria for an AI Incident because the AI system's use has directly caused harm to a vulnerable group and breaches legal protections.
Teens as young as 13 exposed to pornographic content on Elon Musk's X: Study - CNBC TV18

2026-04-15
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: X's recommendation algorithm and content-moderation AI. Their use has directly led to harm by exposing minors to explicit sexual content and enabling unsolicited inappropriate messages, which violates protections for children and harms their health and safety. The harm is realized and ongoing, not merely potential. This therefore qualifies as an AI Incident under the framework, as the AI systems' malfunction or failure to adequately moderate content has directly harmed a vulnerable group (minors).
X recommends explicit content to UK teens. Here's what every parent should know. -- Center for Countering Digital Hate | CCDH

2026-04-15
Center for Countering Digital Hate | CCDH
Why's our monitor labelling this an incident or hazard?
The platform X uses AI algorithms to recommend content and moderate messages. The report shows that these AI systems are actively recommending explicit sexual content to underage users and failing to block or flag it effectively, leading to direct harm to children through exposure to pornography and contact with adults. This is a direct harm to the health and safety of children (harm category a) and a violation of legal protections (harm category c). The AI systems' malfunction or failure to enforce safeguards is central to the harm, so the event meets the criteria for an AI Incident.