Dutch Court Bans Grok AI's Nude Image Generation After Harmful Outputs

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Dutch court has banned the AI chatbot Grok, owned by xAI, from generating non-consensual nude images and child sexual abuse material in the Netherlands. The ruling follows evidence that Grok's 'spicy mode' enabled the creation and distribution of illegal, harmful AI-generated images, prompting legal action by Offlimits and Fonds Slachtofferhulp.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (generative AI used in 'undressing apps' and Grok chatbot) has directly led to harm by enabling the creation and spread of non-consensual sexualized images, violating privacy rights and causing social harm, especially to minors and female politicians. The legal ruling and EU ban are responses to this realized harm. The presence of AI is explicit, the harm is direct and ongoing, and the event centers on addressing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Safety, Privacy & data governance

Industries
Consumer services

Affected stakeholders
General public, Children

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Amsterdam court bans undressing function of chatbot Grok in the Netherlands

2026-03-26
NOS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) and a legal intervention banning a specific harmful feature ('uitkleedfunctie' or 'undressing function') that likely enables generating or manipulating images in a way that could violate privacy or cause harm. Since the court has prohibited the function, it suggests recognition of potential or ongoing harm. However, the article does not explicitly state that harm has already materialized, only that the function is banned. Therefore, this is best classified as an AI Hazard, as the AI system's use could plausibly lead to harm, and the court's action is a preventive measure.
With ban on undressing apps, EU hopes to prevent spread of unwanted nude images

2026-03-26
de Volkskrant
Why's our monitor labelling this an incident or hazard?
The AI system (generative AI used in 'undressing apps' and Grok chatbot) has directly led to harm by enabling the creation and spread of non-consensual sexualized images, violating privacy rights and causing social harm, especially to minors and female politicians. The legal ruling and EU ban are responses to this realized harm. The presence of AI is explicit, the harm is direct and ongoing, and the event centers on addressing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Judge bans undressing function of chatbot Grok in the Netherlands

2026-03-26
NRC
Why's our monitor labelling this an incident or hazard?
Grok is an AI system integrated into platform X that generates AI nude images without consent, including illegal child sexual abuse images. The Dutch court ruling explicitly addresses the harm caused by this AI system's use, including violations of human rights and the law. The generation and dissemination of such images is a clear harm to individuals and communities. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use and the legal enforcement action taken.
Grok may not undress people online with AI software, judge rules

2026-03-26
BN/DeStem
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including non-consensual nude images and child sexual abuse material, which are illegal and harmful. The court ruling and legal actions confirm that harm has occurred, including violations of privacy and child protection laws. The AI system's outputs have directly led to these harms, fulfilling the criteria for an AI Incident. The article focuses on the harm caused and legal consequences, not just potential risks or complementary information.
Judge bans undressing function in chatbot Grok: 'A line has been drawn'

2026-03-26
BN/DeStem
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Grok, an AI chatbot, generates nude images without consent, including illegal child sexual abuse material, which constitutes a violation of human rights and privacy laws. The harms are realized and ongoing, as evidenced by multiple reports and legal action. The court ruling and sanctions confirm the direct link between the AI system's use and the harms caused. This fits the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to individuals and communities.
Judge bans undressing function in chatbot Grok: 'Important milestone'

2026-03-26
Provinciale Zeeuwse Courant
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly described as generating AI nude images and child sexual abuse material, causing harm to individuals' privacy and rights, which constitutes violations of human rights and legal protections. The court ruling and enforcement actions respond to these realized harms. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential harm or regulatory responses but documents actual harm and legal consequences.
Judge: Grok's undressing function banned, with hefty penalty payments

2026-03-26
RTL.nl
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating harmful content, including sexualized images of minors and non-consensual nude images, which is illegal and harmful. The court ruling addresses these harms directly caused by the AI's outputs. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities. The legal action and penalty are responses to this realized harm.
Offlimits wins summary proceedings against Grok

2026-03-26
Dutch IT Channel
Why's our monitor labelling this an incident or hazard?
The AI chatbots Grok and X are AI systems capable of generating images, including harmful content. The court found that these systems have directly led to the generation and potential distribution of illegal and harmful images, constituting violations of rights and harm to individuals. The ruling and evidence indicate that harm has occurred or is ongoing due to the AI systems' outputs. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to significant harm and legal violations.
Elon Musk's Grok ordered to stop creating AI nudes by Dutch court as legal pressure mounts

2026-03-27
CNBC
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system capable of generating images, including sexual content. The Dutch court's order explicitly prohibits the creation and distribution of non-consensual AI-generated sexual images, indicating that such harms have occurred or are ongoing. The involvement of legal penalties and the injunction to stop these activities confirm that the AI system's use has directly led to violations of rights and harm to individuals, including potential child sexual abuse material. This meets the criteria for an AI Incident as the AI system's use has directly led to harm and legal violations.
Dutch court bans xAI's Grok from generating nonconsensual nude images

2026-03-26
Al Jazeera Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) explicitly described as generating hyper-realistic deepfake nude images without consent, which is a violation of human rights and personal dignity. The court ruling confirms that harm has occurred due to the AI system's outputs, fulfilling the criteria for an AI Incident. The involvement of the AI system in producing and distributing harmful content is direct and central to the incident. The legal action and imposed fines further underscore the realized harm and responsibility of the AI system's operator.
Dutch court orders xAI, Grok not to create, distribute non-consensual sex images in Netherlands

2026-03-26
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the AI system Grok is used to generate sexualized images of people without their consent, which is a violation of rights and constitutes harm. The court ruling and imposed fines indicate that harm has occurred and the AI system's role is pivotal in causing this harm. The involvement of the AI system in the misuse leading to violations of rights and online sexual abuse fits the definition of an AI Incident. The event is not merely a potential risk or a complementary update but a concrete legal response to realized harm caused by the AI system's outputs.
Dutch Court Bans XAI's Grok From Nonconsensual Undressing

2026-03-26
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating undressed images without consent, which constitutes a violation of individuals' rights and privacy. The court's ban addresses this harm directly caused by the AI system's use. Since the event describes a legal ruling against the harmful use of the AI system, it is an AI Incident involving violations of human rights due to the AI system's misuse.
Dutch court orders xAI, Grok not to create, distribute non-consensual sex images in Netherlands

2026-03-26
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (xAI and Grok chatbot) that generate sexualized images without consent, which is a violation of personal rights and constitutes harm to individuals and communities. The court order and fines indicate that harm has occurred or is ongoing, and the AI systems' use is directly linked to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm through non-consensual sexual imagery generation and distribution.
Dutch court to Elon Musk's xAI: Stop Grok from creating 'undressing' AI photos or...

2026-03-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) that generates images, including non-consensual sexualized photos, which is a direct violation of individuals' rights and causes harm. The court ruling explicitly holds the AI company responsible for the outputs of its AI system, indicating that the AI's use has directly led to harm. The harm includes violations of personal rights and potential psychological and reputational damage to individuals, fitting the definition of an AI Incident. The presence of legal action and penalties further confirms the materialization of harm rather than a potential risk or complementary information.
Dutch court orders xAI, Grok not to create, distribute non-consensual sex images in Netherlands

2026-03-26
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (xAI and Grok) generating sexualized images without consent, which is a violation of personal rights and privacy, fitting the definition of harm under AI Incident (c) - violations of human rights or breach of obligations protecting fundamental rights. The court ruling and fines indicate that harm has occurred or is ongoing, not just a potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.
Dutch court rules against Grok over AI-generated 'undressing' images in rare legal rebuke

2026-03-26
CNA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that generates images, including sexualized images without consent, which is a direct violation of individuals' rights and causes harm. The court ruling explicitly addresses the harm caused by the AI system's outputs and imposes legal consequences to prevent further harm. The involvement of the AI system in generating harmful content and the legal response to that harm fits the definition of an AI Incident, as the harm has already occurred and the AI system's use is pivotal in causing it.
Dutch court bans Grok from generating nonconsensual undressed images

2026-03-27
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating manipulated explicit images without consent, which constitutes a violation of individuals' rights and potential harm to their dignity and privacy. The court ruling is based on the direct harm caused by the AI system's outputs. The presence of the AI system, its use in generating harmful content, and the resulting legal action confirm this as an AI Incident rather than a hazard or complementary information.
Dutch court bans Grok from generating fake nudes, threatens €100K daily penalties

2026-03-26
POLITICO
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating harmful content (non-consensual nude images and child sexual abuse material), which constitutes violations of human rights and legal protections. The court's intervention and penalties indicate that harm has occurred or is ongoing, making this an AI Incident. The event directly relates to the AI system's use causing or enabling harm, specifically violations of rights and potentially harm to individuals' dignity and safety.