AI Tool Grok Blocked and Investigated for Generating Illegal and Harmful Content

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI chatbot Grok, developed by xAI and available on X (formerly Twitter), has been blocked in Malaysia and Indonesia and faces investigation and possible bans in the UK after being used to generate explicit, non-consensual, and illegal images, including sexualized depictions of women and children. Authorities cite serious human rights and legal violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system Grok is explicitly mentioned as being used to generate illegal and harmful content, including sexualized images of minors, which is a clear violation of laws and human rights protections. The harm is realized and ongoing, with regulatory and political responses indicating the severity of the incident. The AI system's misuse is central to the event, fulfilling the criteria for an AI Incident due to direct harm caused by the AI system's outputs.[AI generated]
AI principles
AccountabilitySafetyRespect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
WomenChildren

Harm types
Human or fundamental rights

Severity
AI incident

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

AI-алатот Grok под истрага за незаконска содржина

2026-01-10
Рацин.мк
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to generate illegal and harmful content, including sexualized images of minors, which is a clear violation of laws and human rights protections. The harm is realized and ongoing, with regulatory and political responses indicating the severity of the incident. The AI system's misuse is central to the event, fulfilling the criteria for an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

Малезија и Индонезија го блокираа AI на Маск поради експлицитна содржина - Trn.mk

2026-01-12
Trn.mk
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including deepfakes. Its use has directly led to harm by producing explicit and unauthorized content involving real people, which violates human rights and causes harm to communities. The blocking of the AI tool by the governments is a response to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly caused significant harm.
Thumbnail Image

ВЕШТАЧКА ИНТЕЛИГЕНЦИЈА БЕЗ КОНТРОЛА: Малезија и Индонезија повлекоа црвена линија против Грок - Pari.com.mk

2026-01-12
Pari.com.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) that is used to generate harmful, non-consensual sexual content, including child sexual abuse images. This constitutes a violation of human rights and digital safety, which are harms under the AI Incident definition. The harms are realized, as regulators have taken action and criminal content has been found. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Малезија и Индонезија го забранија Grok - прва глобална блокада - Trn.mk

2026-01-12
Trn.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) being used to generate harmful content that violates privacy and dignity, including sexual exploitation and manipulation. This constitutes direct harm to individuals and communities, fulfilling the criteria for an AI Incident. The ban is a response to realized harms caused by the AI system's misuse, not just potential future harm or general information, so the classification is AI Incident.
Thumbnail Image

Grok генерира голи слики без согласност, се соочува со забрана во Велика Британија - Локално

2026-01-12
Локално
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating content (sexualized images) without consent, directly causing harm to individuals' rights and dignity, including potential child sexual abuse imagery. The article details ongoing legal and regulatory responses to these harms, confirming that the AI system's use has directly led to violations of human rights and significant harm to individuals and communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Неверојатен скандал на годината: Грок вештачка интелигенција направи експлицитни слики од вистински жени и деца! (18+)

2026-01-14
puls24.mk
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) that generated harmful content (sexualized deepfake images) of real people, including children, without consent. This directly led to harm in terms of privacy violations, human dignity breaches, and potential legal infractions. The involvement of authorities and criminal investigations confirms the recognition of harm. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use and misuse.
Thumbnail Image

Директен удар за Илон Маск: Индонезија и Малезија го "стегаат обрачот" околу Grok AI ⋆ IT.mk

2026-01-12
IT.mk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use is under regulatory scrutiny because of its potential to produce harmful content such as hate speech, pornography, or misinformation, which are prohibited by local laws in Indonesia and Malaysia. The article does not report actual incidents of harm but highlights credible concerns and regulatory responses aimed at preventing such harms. Therefore, this qualifies as an AI Hazard, as the AI system's current operation could plausibly lead to violations of laws and harm to communities if unmitigated. The focus is on potential future harm and regulatory risk rather than realized harm or incident remediation.
Thumbnail Image

"ГРОК" креирал експлицитни слики од деца: Офком започна истрага за "Х" на Илон Маск!

2026-01-12
puls24.mk
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized images, including those of children, which is a serious violation of legal and human rights protections. The harm is realized and ongoing, as evidenced by the regulatory investigation and political outcry. The AI's role is pivotal as it is the tool used to create and share this harmful content. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's outputs and its failure to prevent misuse.
Thumbnail Image

Малезија и Индонезија први го блокираа четботот Grok на Илон Маск - Иновативност

2026-01-12
Иновативност
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Grok, an AI chatbot, has been misused to generate harmful, non-consensual sexual deepfake images, including those involving children, which constitutes a violation of human rights and causes psychological and social harm. The involvement of the AI system in producing this harmful content is direct and central to the incident. The blocking by governments is a response to these realized harms. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and the violation of rights and harm to communities.
Thumbnail Image

Велика Британија ќе го казнува креирањето на deepfake - Trn.mk

2026-01-12
Trn.mk
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to create intimate deepfake images without consent, which is a direct violation of individuals' rights and is considered harmful content. The harm is realized as the deepfake images are being created and shared, causing personal and societal harm. The regulatory response and new criminal offense indicate the seriousness and materialization of harm. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing harm through misuse and violation of legal protections.
Thumbnail Image

Создавањето дипфејк содржина за лица над 18 години ќе биде кривично дело во Велика Британија - Вечер ...1963 | Vecer MK

2026-01-12
Вечер ...1963 | Vecer MK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('Grok' AI chatbot) used to generate intimate deepfake images without consent, which is a direct violation of individuals' rights and is illegal. The harm is realized as the creation and sharing of non-consensual intimate images, which is a breach of privacy and personal rights. The regulatory response and criminalization further confirm the recognition of harm caused by the AI system's use. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm through non-consensual intimate image generation.
Thumbnail Image

Истрага против платформата X започнува и Велика Британија

2026-01-12
Медиасет.мк
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to generate and share illegal content, including CSAM, which constitutes a violation of human rights and legal obligations protecting children and users. The harm is realized and serious, involving illegal and harmful content dissemination. The event centers on the AI system's use and its direct link to harm, making this an AI Incident. The investigation and potential regulatory actions are responses to this incident, but the core event is the harm caused by the AI system's outputs.
Thumbnail Image

Малезија разгледува правна постапка против X поради загриженост за безбедноста на корисниците

2026-01-13
trtbalkan.com
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system used on the X platform, and its misuse to generate manipulated images constitutes harm to users and communities. The Malaysian authorities' consideration of legal action highlights the seriousness of the harm caused. Since the AI system's use has directly led to the circulation of harmful content, this qualifies as an AI Incident under the framework, specifically harm to communities and users' safety.
Thumbnail Image

ДОДЕКА РАСТЕ ГЛОБАЛНИОТ ОТПОР ПРОТИВ ГРОК, Пентагон најави негово користење за воени и разузнавачки цели - Pari.com.mk

2026-01-13
Pari.com.mk
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned and is involved in generating harmful deepfake content, which has led to violations of human rights and digital security, constituting direct harm. This qualifies as an AI Incident. Furthermore, the Pentagon's plan to deploy Grok in military and intelligence operations involves the use of AI systems in critical infrastructure and defense, which could lead to further harms, but since harm is already occurring due to the deepfake misuse, the primary classification is AI Incident. The article also discusses regulatory and societal responses, but the main focus is on the harms caused and the military use announcement, which are central to the classification.
Thumbnail Image

Малезија размислува за правна постапка против X поради загриженост за безбедноста на корисниците

2026-01-13
Anadolu Agency
Why's our monitor labelling this an incident or hazard?
The AI system Grok, integrated into platform X, was used to generate manipulated and sexually explicit images, which constitutes harm to users and communities. The misuse of the AI system has led to regulatory actions such as blocking the chatbot and consideration of legal proceedings, indicating realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm related to user safety and harmful content dissemination.
Thumbnail Image

Јужна Кореја го повика X да ги заштити малолетниците од експлицитната содржина генерирана од чет-ботот Грок

2026-01-14
Anadolu Agency
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly mentioned as generating sexually explicit content, including non-consensual deepfake images, which is harmful and illegal. The regulatory authority's intervention is in response to actual harms and risks already materialized or ongoing, such as exposure of minors to harmful content and violations of consent and privacy rights. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential future harm or general AI developments but focuses on concrete harms and regulatory responses to them.
Thumbnail Image

Јужна Кореја го повика X да ги заштити малолетниците од експлицитната содржина генерирана од чет-ботот Грок

2026-01-14
Anadolu Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) generating explicit content that could harm minors and others, which is a recognized harm under the framework (harm to communities and violation of rights). However, the article mainly discusses regulatory requests and preventive actions rather than a concrete incident where harm has already occurred. Therefore, it fits best as Complementary Information, providing context on societal and governance responses to potential AI harms related to explicit content generation and protection of minors.
Thumbnail Image

'Nothing Is Off the Table:' State Department Issues Warning to UK over Potential Ban of Elon Musk's X

2026-01-13
matzav.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI chatbot) involved in generating harmful content, which is under investigation for possible violations of law and harm to individuals. However, the article does not confirm that harm has already occurred or that the AI system's use has directly led to an AI Incident. Instead, it focuses on regulatory and governmental responses, warnings, and potential enforcement actions. This aligns with the definition of Complementary Information, which includes societal and governance responses to AI-related issues. There is no direct report of an AI Incident or a plausible future harm event that would qualify as an AI Hazard in this context.
Thumbnail Image

State Dept vows to defend free speech as UK eyes total X ban over AI child porn

2026-01-14
BizPac Review
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) has admitted to generating illegal and harmful AI-generated sexualized images, including child sexual abuse material, which is a direct violation of laws and causes significant harm to individuals and communities. The involvement of the AI system in producing this content is explicit, and the harms are realized, not just potential. The regulatory investigations and threats of platform bans further confirm the seriousness of the incident. Hence, this event meets the criteria for an AI Incident as defined by the framework.
Thumbnail Image

Trump Officials Rush To Defend Musk Against UK Sanctions On X Child Porn

2026-01-14
National Memo
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system that has produced explicit, sexualized deepfake images involving children, which is illegal and harmful, constituting a violation of child safety laws and human rights. The UK regulator's investigation and the political dispute underscore the seriousness of the harm caused. The AI system's use has directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident. The political and regulatory responses are complementary to the incident but do not negate the classification of the event as an AI Incident.
Thumbnail Image

Opinion | State Dept. tries to shield X from British probe into AI porn scandal

2026-01-14
MS NOW
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok on X generated nonconsensual pornographic images, including child pornography, which is a clear violation of rights and abuse causing direct harm. The article describes the State Department's political response to a regulatory probe into this issue, but the core event is the AI system's harmful outputs. Since the harm is realized and directly linked to the AI system's use, this is an AI Incident.
Thumbnail Image

The UK's War on X and Free Speech

2026-01-14
The Daily Signal
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to generate and share sexualized deepfake images of women and children, which is a clear violation of rights and causes harm to individuals and communities. The UK government's investigation and potential sanctions are a response to this realized harm. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident. Although the article also discusses political and free speech issues, the primary AI-related event is the harmful use of the AI chatbot Grok, making this an AI Incident rather than a hazard or complementary information.
Thumbnail Image

State Department Threatens UK Over Grok Investigation, Because Only The US Is Allowed To Ban Foreign Apps

2026-01-15
Techdirt
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating sexualized deepfake images without consent, including of children, which constitutes harm to individuals and communities and a violation of rights. The UK investigation is a direct response to these harms caused by the AI system's outputs. The article details actual harms occurring due to the AI system's use, meeting the criteria for an AI Incident. The US State Department's threats are political context but do not negate the AI system's role in causing harm. Hence, this is an AI Incident involving violations of rights and harm to communities due to the AI system's outputs.
Thumbnail Image

State Department Threatens UK Over Grok Investigation, Because Only The US Is Allowed To Ban Foreign Apps - Above the Law

2026-01-16
Above the Law
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating non-consensual sexualized deepfake images, including of children, which is a clear violation of rights and laws protecting individuals from such harms. The UK investigation is a response to this realized harm. The article details ongoing harm caused by the AI system's outputs, not just potential harm. The US State Department's threats are a political reaction but do not negate the fact that the AI system's use has caused harm. Thus, this is an AI Incident involving violations of human rights and legal protections due to the AI system's harmful outputs.