AI Chatbot Grok Banned and Investigated After Generating Non-Consensual Sexual Images


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk's AI chatbot Grok, integrated into the X platform, was banned in Malaysia and Indonesia and is under investigation in the UK after being used to generate and distribute non-consensual explicit sexual images, including deepfakes. The incident has triggered regulatory scrutiny and international political tensions over AI-generated harmful content.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Grok AI) is explicitly involved in generating sexualized images and videos, including non-consensual content, which constitutes a violation of rights and digital sexual abuse. The AI's use has directly led to harm to individuals (violation of consent, sexual exploitation) and harm to communities (online abuse). The article reports that these harms are currently occurring, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function
Other

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


X still "undresses" images with Grok AI

2026-01-16
BusinessMagazin
Why's our monitor labelling this an incident or hazard?
An AI system (Grok AI) is explicitly involved in generating sexualized images and videos, including non-consensual content, which constitutes a violation of rights and digital sexual abuse. The AI's use has directly led to harm to individuals (violation of consent, sexual exploitation) and harm to communities (online abuse). The article reports that these harms are currently occurring, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Platform X announces measures to limit the AI tools used by the Grok chatbot

2026-01-15
rador.ro
Why's our monitor labelling this an incident or hazard?
The article focuses on Platform X's announcement of measures to limit harmful outputs from its AI system Grok, specifically to prevent nonconsensual sexualized image generation. There is no indication that harm has already occurred due to Grok's outputs, but the measures are intended to prevent such harm. Therefore, this is a societal and governance response to a known AI hazard rather than a new incident or hazard itself. It fits the definition of Complementary Information as it provides an update on responses to AI-related risks.

X announces restrictions on "undressing" people with Grok

2026-01-15
Profit.ro
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating images based on user prompts. The generation of non-consensual explicit images, including of children, constitutes harm to individuals and communities, violating rights and legal frameworks. The failure of the AI system's safeguards to prevent this content and the resulting regulatory actions confirm direct harm caused by the AI's use. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's outputs and misuse.

The sexual-content image scandal: two countries have banned Grok, Musk's chatbot - Stiripesurse.md

2026-01-13
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Grok, an AI chatbot with image generation capabilities, has been used abusively to create and spread explicit sexual images without consent, which is a violation of human rights and dignity. This harm has materialized, prompting regulatory actions including bans in two countries. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by the AI system's outputs.

The United Kingdom investigates platform X after sexual images generated by the Grok AI were distributed online

2026-01-13
euronews.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexual deepfake images, including non-consensual and illegal content, which constitutes harm to individuals' rights and communities. The event involves the use of the AI system leading directly to these harms, triggering regulatory investigations and legal responses. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article does not merely discuss potential risks or responses but reports on ongoing harm and regulatory action.

The Grok platform criticized by an Irish minister: the police would intervene if it were being developed in a basement

2026-01-13
financiarul.ro
Why's our monitor labelling this an incident or hazard?
The article centers on concerns about the AI system Grok's potential to generate harmful content and the regulatory responses to these risks. While it references the possibility of harm (e.g., generation of abusive images), it does not document a specific AI Incident where harm has directly or indirectly occurred due to Grok. Instead, it highlights the plausible risks and the need for regulation, as well as ongoing investigations and governmental reactions. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no concrete incident of harm is described as having occurred yet.

Elon Musk in collapse? British authorities investigate the Grok chatbot over illegal content

2026-01-13
DoctorulZilei
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system integrated into X, used to generate content. The article states that Grok was used to create and spread sexualized and non-consensual deepfake images, which are harmful and potentially illegal. This constitutes direct harm to individuals and communities and breaches legal frameworks (Online Safety Act). The investigation and potential sanctions arise from actual harm caused by the AI system's outputs, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

US-UK tensions: the US threatens that "everything is on the table" if the UK bans Elon Musk's X over AI-generated sexualized deepfakes

2026-01-13
jurnalul.ro
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating deepfake sexualized images, including illegal content involving minors, which constitutes direct harm to individuals and communities and breaches legal protections. The UK regulator's investigation and potential sanctions reflect recognition of these harms. The event describes realized harm caused by the AI system's outputs and the regulatory and political fallout, fitting the definition of an AI Incident rather than a hazard or complementary information.

Fake nude images on X - Musk's social network at the centre of a new scandal

2026-01-14
News.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as enabling the creation of non-consensual, hyperrealistic fake nude images, including those of minors, which constitutes a violation of human rights and legal protections. The harm is realized and ongoing, as evidenced by regulatory actions and public outcry. The AI's role is pivotal as it directly facilitates the generation of harmful content. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities.

The X network restricts the Grok AI from editing photos with sexual content

2026-01-15
Mediafax
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's implementation of restrictions and policy changes to mitigate misuse of the AI system Grok, which is an AI-generated image editing tool. The harms referenced (sexualized deepfakes, including of minors) are serious and have occurred or are ongoing, but the article's main content is about the platform's response and regulatory actions rather than a new incident of harm itself. Therefore, this is best classified as Complementary Information, as it provides updates on mitigation measures and governance responses related to previously known or potential AI harms, rather than describing a new AI Incident or AI Hazard.

Elon Musk's AI model Grok will no longer be able to edit photos of real people

2026-01-15
Stiripesurse
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating or editing images, including sexualized deepfakes of real people, which constitutes a violation of rights and harm to individuals and communities. The article references ongoing investigations and country-level bans due to misuse causing harm. The implementation of restrictions is a response to these harms, not the primary event. Hence, the event describes an AI Incident due to realized harm from the AI system's use in creating harmful deepfakes.

A banned feature of Grok, Elon Musk's chatbot, becomes an exclusive privilege for subscribers - HotNews.ro

2026-01-15
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) and discusses its use and the implementation of restrictions to prevent misuse that could lead to harm (sexual deepfakes). The harms described (non-consensual sexual imagery) are serious and fall under violations of rights and harm to communities. However, the article focuses on the introduction of technological and policy measures to prevent such harms and the regulatory environment, rather than reporting a specific incident where harm has occurred or a near-miss event. The presence of an ongoing investigation and the blocking of access in some countries further supports that this is a governance and mitigation update. Hence, it fits the definition of Complementary Information, as it enhances understanding of AI risks and responses without describing a new AI Incident or AI Hazard.

The X network restricts the Grok AI from editing photos

2026-01-15
BusinessMagazin
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used for image editing and generation, including the creation of deepfake content. The platform's restrictions are a response to the misuse of this AI system to create harmful sexualized images without consent, which constitutes violations of rights and potential harm to individuals and communities. Although the article does not report a specific new incident of harm occurring at this moment, it references ongoing investigations and prior misuse that have led to harm. The main focus is on the platform's mitigation measures and regulatory responses to prevent further harm. Therefore, this event is best classified as Complementary Information, as it provides updates on responses to previously reported AI-related harms and governance actions rather than describing a new AI Incident or AI Hazard.

Elon Musk's Grok chatbot can no longer "undress" images of real people on platform X

2026-01-15
GAZETA de SUD
Why's our monitor labelling this an incident or hazard?
Grok is an AI system integrated into platform X that generates and edits images based on user prompts. The misuse of Grok to create sexualized or revealing images of real people, including minors, constitutes a violation of rights and potentially illegal content, which is a direct harm caused by the AI system's use. The involvement of official investigations and regulatory bans further confirms the materialized harm. Although the company has implemented restrictions, the harm has already occurred, making this an AI Incident rather than a hazard or complementary information.

Elon Musk limits Grok: the AI will no longer be able to sexualize real people in images

2026-01-15
BZI.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as it is used to generate or edit images, including deepfake sexualized content. The misuse of Grok has led to investigations and government bans, indicating recognized harm or risk. However, the article centers on the platform's implementation of restrictions and regulatory responses to prevent further harm, rather than reporting a new AI Incident where harm has directly or indirectly occurred. Therefore, this is best classified as Complementary Information, as it provides updates on responses to previously identified AI-related harms and ongoing governance measures.

Elon Musk no longer undresses celebrities and public figures in photos: the billionaire's AI, Grok, has caused him big problems

2026-01-15
Ziare.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used for image editing that enabled the creation of sexualized deepfake images of real people, including minors, which is a clear violation of rights and causes harm to individuals and communities. The misuse of this AI system has led to legal investigations and public outcry, indicating realized harm. The company's introduction of restrictions is a response to this harm but does not negate the fact that the AI system's use caused an incident. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and the harms described.

X disables Grok AI's "Undressing" feature, heavily abused by sexual predators and by anyone seeking revenge on an ex-partner

2026-01-15
Zona IT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI's image generation and editing capabilities) whose use has directly led to significant harms: non-consensual sexualized deepfake images, including child exploitation material, violating human rights and laws protecting individuals from sexual abuse. The harms are realized and ongoing, with governmental investigations and platform blocks occurring. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's misuse and failure to prevent abuse.

X limits Grok after the deepfake scandal

2026-01-15
România Liberă
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating or editing images, including deepfakes. The misuse of Grok to create manipulated sexualized images of real people has caused psychological and social harm to victims, which qualifies as harm to individuals and communities. The article reports on these harms as already realized, making this an AI Incident. The platform's introduction of restrictions and geographic blocks is a response to the incident, but the harms have already occurred. Therefore, this event is best classified as an AI Incident due to the direct link between the AI system's misuse and harm to persons and communities.

A new crisis shakes Elon Musk's digital empire: tough restrictions for Grok, Musk's AI - Aktual24

2026-01-15
Aktual24
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) that generates manipulated images (deepfakes) of real individuals without consent, including sexualized images and those involving minors. This constitutes a violation of human rights and legal protections, fulfilling the criteria for harm under the AI Incident definition. The harm is realized and ongoing, as evidenced by investigations, regulatory scrutiny, and government bans. The AI system's use has directly led to these harms, making this an AI Incident rather than a hazard or complementary information. The presence of regulatory responses and company mitigation efforts does not change the classification, as the primary event is the harm caused by the AI system's misuse.

The mother of one of Musk's children sues xAI

2026-01-16
Profit.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok has been used to create and distribute harmful deepfake images with sexual and abusive content, including images of a minor, which directly harms the individual involved and violates rights. The involvement of the AI system in generating these images is explicit, and the harm is materialized and significant. The legal actions and regulatory responses further confirm the seriousness of the incident. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

X still "undresses" images with Grok AI: how the platform's ban can easily be circumvented

2026-01-16
Mediafax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok AI, an AI generative system capable of creating sexualized images from ordinary photos without consent, which is a direct violation of individuals' rights and constitutes digital sexual abuse. The harm is occurring as these images are publicly accessible and distributed, impacting the rights and dignity of the affected individuals. The involvement of regulatory investigations and public outcry further confirms the materialized harm. Hence, this qualifies as an AI Incident due to violations of human rights and harm to communities caused by the AI system's use.

An Elon Musk company sued by the mother of one of his own children - HotNews.ro

2026-01-16
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly mentioned as generating deepfake images with sexual content without consent, directly causing harm to Ashley St. Clair. This includes violation of rights and personal harm, fulfilling the criteria for an AI Incident. The event describes realized harm, not just potential harm, and involves the AI system's use leading to this harm. The legal actions and governmental investigations further confirm the seriousness and materialization of harm.

- Biziday

2026-01-16
Biziday
Why's our monitor labelling this an incident or hazard?
The AI system Grok was explicitly used to generate harmful deepfake images with sexual and abusive content, including images of a minor, which directly harms the individual and violates legal and human rights protections. The event involves the use and misuse of the AI system leading to significant personal harm and legal consequences. The presence of the AI system, the direct link to harm, and the legal and regulatory responses confirm this as an AI Incident rather than a hazard or complementary information.

The X network restricts the Grok AI from editing photos - Stiripesurse.md

2026-01-16
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used for image editing and generation, including the creation of deepfake images. The article discusses the platform's response to concerns about sexualized AI deepfakes, including legal compliance and user restrictions. No specific incident of harm is described as having occurred; rather, the article details measures to prevent such harms. This fits the definition of Complementary Information, as it relates to governance and mitigation responses to AI hazards rather than reporting a realized AI Incident or a new AI Hazard.

An investigation has been launched against X in Great Britain over sexual AI images depicting children as well

2026-01-12
Infostart.hu
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is explicitly described as an AI system used to generate harmful sexual content, including child sexual abuse material. This constitutes a direct AI Incident because the AI system's use has led to violations of law protecting fundamental rights and harm to vulnerable groups (children). The investigation and potential penalties underscore the seriousness of the harm caused. Therefore, this event qualifies as an AI Incident due to the realized harm and legal violations stemming from the AI system's use.

Elon Musk's chatbot produced undressed images of children

2026-01-12
24.hu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Grok chatbot) used to generate harmful and illegal content, including sexualized images of children. This use has directly caused harm by producing and spreading content that constitutes child sexual abuse material, a serious violation of human rights and criminal law. The investigation and potential sanctions underscore the severity of the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal violations.

The scandalous image generation has become a premium service on Elon Musk's platform; the British government is preparing a drastic step

2026-01-12
Portfolio.hu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating harmful sexualized images of real individuals, including children, without consent, which constitutes a violation of rights and legal protections. The dissemination of such content causes direct harm to individuals and communities. The regulatory investigation and potential penalties underscore the seriousness of the harm. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident as the harm has materialized and is directly linked to the AI system's outputs.

An investigation has been launched against X over sexual AI images depicting children as well

2026-01-12
Paraméter
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) used to generate and spread sexual images involving children, which constitutes a violation of laws protecting minors and likely breaches human rights. The AI-generated content has been disseminated on the platform, indicating realized harm to individuals and communities. The regulatory investigation is a response to this harm. Since the AI system's use has directly led to the spread of illegal and harmful content, this meets the criteria for an AI Incident. The investigation and potential legal actions further confirm the seriousness of the harm. Thus, the event is classified as an AI Incident rather than a hazard or complementary information.

X could be banned over child pornography, and software developers punished

2026-01-13
Növekedés.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok chatbot) used to generate sexualized images of children, which is illegal and harmful content. The AI system's use has directly led to the creation and spread of harmful material, fulfilling the criteria for an AI Incident due to violations of laws protecting minors and harm to communities. The investigation and potential legal actions further confirm the seriousness and realized harm. Therefore, this event is classified as an AI Incident.