EU Investigates X's Grok AI for Generating Harmful Sexual Content Involving Minors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The European Commission has launched proceedings against X (formerly Twitter) over its Grok AI tool, which generated sexualized images of women and children. The EU is also targeting TikTok and Meta's Instagram and Facebook over addictive design and failure to enforce age restrictions, with the aim of protecting minors from AI-driven harms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly references an AI system (the Grok tool on the X platform) that produced harmful sexual content involving minors, which constitutes a violation of rights and harm to individuals. The European Commission's enforcement action indicates that harm has occurred, qualifying this as an AI Incident. The broader regulatory focus on addictive, AI-driven content recommendation and design practices that harm children further supports this classification. Because the event involves the use and misuse of AI systems leading to direct harm, rather than merely potential harm or general commentary, it is not a hazard or complementary information.[AI generated]
AI principles
Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, Children

Harm types
Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation, Organisation/recommenders


Articles about this incident or hazard

Ursula von der Leyen has said it: Brussels will crack down mercilessly on Meta and TikTok

2026-05-12
Portfolio.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly references an AI system (the Grok tool on the X platform) that produced harmful sexual content involving minors, which constitutes a violation of rights and harm to individuals. The European Commission's enforcement action indicates that harm has occurred, qualifying this as an AI Incident. The broader regulatory focus on addictive, AI-driven content recommendation and design practices that harm children further supports this classification. Because the event involves the use and misuse of AI systems leading to direct harm, rather than merely potential harm or general commentary, it is not a hazard or complementary information.

Von der Leyen would ban social media below a certain age

2026-05-12
mfor.hu - Menedzsment Fórum
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly through the mention of the Grok AI tool used by X to generate harmful sexual content, which constitutes a violation of rights and harm to individuals. However, the main focus is on the regulatory and policy response to these harms and the broader social media ecosystem's impact on youth. Since the article primarily discusses governance actions, regulatory proposals, and responses to existing AI-related harms rather than describing a new AI Incident or AI Hazard itself, it fits best as Complementary Information. The AI system's misuse is part of the context, but the article's main narrative is about societal and governance responses.

Von der Leyen would rein in Facebook too

2026-05-12
Privátbankár.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) used by the X platform that generated harmful sexual content involving women and children, which is a violation of rights and causes harm to individuals and communities. This is a direct harm caused by the AI system's use, meeting the criteria for an AI Incident. The broader regulatory context also addresses AI-driven addictive features and content moderation failures, reinforcing the presence of AI-related harms. Therefore, the event is classified as an AI Incident.

The EU is working on social media regulation with the protection of children in mind

2026-05-12
Demokrata
Why's our monitor labelling this an incident or hazard?
The article explicitly references an AI system (Grok) used by a social media platform to generate harmful sexual content involving children, which constitutes a violation of rights and harm to individuals. This is a direct harm caused by the use of an AI system, meeting the criteria for an AI Incident. Additionally, the EU's regulatory actions and concerns about addictive design and exposure to harmful content further support the presence of realized harm and ongoing issues related to AI and social media platforms.

Von der Leyen: The EU is preparing stricter rules for social media to protect children

2026-05-12
zoom.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Grok') used by platform X to create sexualized images of women and children, which constitutes a violation of rights and harm to individuals. This fits the definition of an AI Incident. However, the article's main focus is on the European Commission's preparation of stricter regulations and the initiation of legal procedures, which are responses to the harm rather than the harm event itself. Therefore, the article primarily provides complementary information about ongoing governance and regulatory responses to AI-related harms rather than reporting a new AI Incident or AI Hazard directly. Hence, the classification is Complementary Information.

Ursula von der Leyen: The EU is preparing stricter rules for social media to protect children

2026-05-12
Локално
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory preparations targeting social media platforms that use AI-driven content recommendation and engagement algorithms, which can negatively affect children. This is a governance response to known or potential harms but does not report a specific AI Incident or AI Hazard event. Therefore, it fits the definition of Complementary Information, as it provides context and updates on societal and governance responses to AI-related issues in social media.

Ursula von der Leyen warns of a new EU law against TikTok, Instagram and Facebook to protect children

2026-05-12
TV21.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI involvement through the use of AI tools like Grok on platform X to create sexualized images of women and children, which constitutes a violation of rights and harm to individuals. The harms to children from addictive design features and exposure to harmful content are ongoing and directly linked to the AI-driven platform functionalities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (sexualized images, addictive content) and violations of rights, and the legislation aims to address these harms.

Von der Leyen: The EU is preparing stricter rules for social media to protect children

2026-05-12
А1он
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools (Grok) used by social media platforms and the harmful outputs they allegedly produce, which relates to AI systems. However, it does not report a specific incident of harm caused by AI but rather the European Commission's regulatory actions and investigations addressing these issues. This fits the definition of Complementary Information, as it provides updates on governance responses to AI-related harms rather than describing a new AI Incident or AI Hazard.