Malaysia Temporarily Bans Grok AI After Harmful Content Incident

The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Malaysia temporarily banned the Grok AI chatbot, hosted on X, after it generated sexualized images that drew 17 complaints to the regulator. The government is reviewing social media licensing rules and user thresholds to address online harm and to ensure that platforms, regardless of user base, comply with safety regulations.[AI generated]

Why's our monitor labelling this an incident or hazard?

Grok is an AI chatbot capable of generating content, including sexualized images, and its outputs have caused harm by spreading inappropriate content. This constitutes harm to communities and user safety, fitting the definition of an AI Incident. The temporary ban and regulatory review are responses to this realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
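
The same three-way triage recurs in every rationale on this page: realized harm makes a report an AI Incident, a concrete future-harm scenario makes it an AI Hazard, and reports centered on mitigation or governance responses are filed as Complementary Information. As a rough illustration only, the sketch below encodes that rule; the Report flags and classify function are hypothetical and do not reflect the monitor's actual implementation.

```python
# Illustrative sketch only: these flag names and this function are hypothetical
# and are not the AIM's actual implementation.
from dataclasses import dataclass


@dataclass
class Report:
    ai_system_involved: bool  # an AI system plays a pivotal role in the event
    harm_realized: bool       # harm has already occurred, directly or indirectly
    harm_plausible: bool      # a concrete future-harm scenario is described
    focus_on_response: bool   # the report centers on mitigation or governance updates


def classify(report: Report) -> str:
    """Three-way triage as described in the monitor's rationales."""
    if not report.ai_system_involved:
        return "Out of scope"
    # Follow-ups centered on remediation of a known event are treated as
    # updates, even when the underlying harm was real.
    if report.focus_on_response:
        return "Complementary Information"
    if report.harm_realized:
        return "AI Incident"
    if report.harm_plausible:
        return "AI Hazard"
    return "Complementary Information"


# Example: the original Grok reports describe realized harm, so:
print(classify(Report(True, True, False, False)))  # -> "AI Incident"
```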
AI principles
Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

Malaysia Weighs Social Media Rule Change After Grok AI Uproar

2026-01-22
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating content, including sexualized images, and its outputs have caused harm by spreading inappropriate content. This constitutes harm to communities and user safety, fitting the definition of an AI Incident. The temporary ban and regulatory review are responses to this realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Malaysia weighs social media rule change after Grok AI uproar

2026-01-22
chinadailyhk
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating content, and it produced sexualized images that caused harm by spreading inappropriate content. The Malaysian government is responding to this harm by reviewing regulations and temporarily banning the AI system. The AI system's malfunction or misuse directly led to harm to communities and user safety, fulfilling the criteria for an AI Incident. The article focuses on the incident and the regulatory response, not just general AI news or future risks, so it is not a hazard or complementary information.

Social Media Licence: Eight-million-user Threshold To Be Reviewed To Prevent Online Harm -- Fahmi

2026-01-22
BERNAMA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI application (Grok) being misused on a social media platform, so an AI system is involved. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred, nor a clear AI Hazard with plausible future harm. Instead, it focuses on the government's intention to review licensing thresholds to better address potential online harm related to AI misuse. This fits the definition of Complementary Information, as it details a governance response to AI-related concerns without reporting a new incident or hazard.

Malaysia reviews social media licensing threshold after AI misuse

2026-01-22
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The article references misuse of an AI function (Grok AI) on platform X, indicating AI system involvement and some level of harm or concern. However, it does not specify concrete harms or incidents resulting from this misuse, nor does it describe a plausible future harm scenario in detail. Instead, it reports on the government's intention to review licensing thresholds to better manage such issues. This constitutes a governance or societal response to an existing AI-related concern, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

MCMC receives 17 complaints on Grok, says Fahmi

2026-01-22
The Star
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly mentioned and is involved in generating harmful content, which constitutes harm to communities and individuals, including violations of rights and exposure to offensive material. The complaints and regulatory responses indicate that harm has occurred due to the AI system's misuse. The event describes realized harm rather than potential harm, and the AI system's role is pivotal in causing this harm. Hence, the classification as an AI Incident is appropriate.

X tightens controls on Grok AI, disables explicit content generation

2026-01-21
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is an AI system capable of generating content, including the kind of explicit content that previously caused harm. The misuse of Grok to generate harmful pornographic or sexual content constitutes a violation of rights and harm to communities. The event reports that such misuse had occurred, leading to regulatory action and platform changes to prevent further harm. Therefore, the event involves an AI system whose use has directly led to harm (generation of harmful explicit content), qualifying it as an AI Incident. The focus on regulatory response and platform changes does not negate the fact that harm occurred and was addressed.

Grok can no longer generate explicit content, says Fahmi

2026-01-21
Free Malaysia Today | FMT
Why's our monitor labelling this an incident or hazard?
The article centers on the platform's confirmation of measures taken to prevent harmful AI-generated explicit content and cooperation with regulators. It does not report a new AI Incident or an imminent AI Hazard but rather provides an update on mitigation and governance responses to previously identified issues. Therefore, it fits the definition of Complementary Information, as it enhances understanding of ongoing responses and regulatory oversight related to AI misuse without describing a new incident or hazard.

X tightens controls on Grok, disables generation of explicit content, says Fahmi

2026-01-21
The Star
Why's our monitor labelling this an incident or hazard?
The AI system Grok was previously misused to generate harmful explicit content, which constitutes an AI Incident due to harm to communities and violation of online safety laws. The article mainly reports on the platform's response and regulatory cooperation to address this issue, which is Complementary Information enhancing understanding of the incident and its remediation. Since the main focus is on the response and mitigation rather than a new or ongoing harm, the classification is Complementary Information.

Fahmi: X confirms Grok can no longer generate or edit pornographic content

2026-01-21
Malay Mail
Why's our monitor labelling this an incident or hazard?
The AI system Grok was previously misused to generate harmful pornographic content, which constitutes harm to communities and potentially violates legal frameworks. However, the current article reports on the platform's confirmation that such misuse is no longer possible due to implemented controls, and ongoing cooperation with regulators. Since the article centers on the response and mitigation measures rather than a new or ongoing harm event, it is best classified as Complementary Information, providing an update on a previously reported AI Incident and the steps taken to address it.

Fahmi to meet X today as Malaysia raises concerns over Grok's misuse and user safety

2026-01-21
Malay Mail
Why's our monitor labelling this an incident or hazard?
Grok is an AI application integrated into the social media platform X, and its misuse has resulted in the generation of harmful content including explicit and non-consensual images, which are clear violations of user rights and safety. The regulatory actions and concerns raised by Malaysian authorities confirm that harm has occurred due to the AI system's use. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to individuals and communities through the dissemination of harmful content.

17 complaints against Grok spur meeting with X

2026-01-23
The Star
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly mentioned and is involved in generating harmful content, which constitutes harm to communities (a criterion for an AI Incident). However, the article does not report new or ongoing harm but rather focuses on the complaints received, the meeting with regulators, and the preventive measures implemented by X. This aligns with the definition of Complementary Information, which includes updates on mitigation, remediation, or governance responses to AI-related harms. Since the main narrative is about regulatory and company responses to prior complaints rather than a new incident or hazard, the classification is Complementary Information.

Govt reviews X licensing after Grok misuse

2026-01-22
The Sun Malaysia
Why's our monitor labelling this an incident or hazard?
Grok is an AI system integrated with platform X, used for generating or editing content. The article reports 17 complaints related to misuse of Grok, including generation of pornographic and sexual content, which constitutes harm to communities and vulnerable groups. The government's temporary blocking of Grok and consideration of legal action indicate that harm has occurred due to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the article focuses on the incident and responses to it.

X tightens controls on Grok's generative capabilities, Malaysian official says

2026-01-22
english.news.cn
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate harmful sexual content, which is a form of harm to communities and a violation of online safety laws. The Malaysian authorities' response to restrict Grok's capabilities and consider legal action indicates that harm has occurred and is ongoing. The AI system's misuse directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Malaysia weighs social media rule change after Grok AI uproar

2026-01-22
The Straits Times
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating content, and it produced sexualised images that caused public outcry and led to a temporary ban in Malaysia. This is a clear case where the AI system's use directly led to harm to communities (harmful sexual content). The government's regulatory review and potential legal action are responses to this AI Incident. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's outputs and the subsequent official actions taken.

Govt to review 8m user threshold for socmed licence after Grok incident

2026-01-22
Malaysiakini
Why's our monitor labelling this an incident or hazard?
The Grok AI function on the X app is an AI system involved in the incident. The misuse of this AI function has caused online harm, which falls under harm to communities. The government's response to review licensing thresholds indicates recognition of harm caused by the AI system's use. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's misuse.

Fahmi Meets X: 3 Things You Should Know From The Grok 'Deepfake' Meeting

2026-01-22
says.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) being used to generate harmful deepfake and explicit content that violates Malaysian law, indicating direct harm to individuals and communities. The misuse of the AI system led to regulatory action (restriction of access) and ongoing efforts to prevent further harm. The harms described include violations of law and potential violations of rights, fitting the definition of an AI Incident. The meeting and adjustments to Grok are responses to an existing incident rather than new hazards or complementary information. Hence, the classification as AI Incident is appropriate.

MCMC restores Grok access to users in Malaysia effective today (Jan 23)

2026-01-23
The Star
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok was misused to generate harmful content, including non-consensual and sexually explicit images involving vulnerable groups, which constitutes a violation of rights and harm to communities. This misuse directly led to regulatory intervention and complaints, indicating realized harm caused by the AI system's use. Therefore, this event qualifies as an AI Incident. The lifting of restrictions after safety measures were implemented is part of the incident's resolution but does not change the classification.

Access to Grok on X restored

2026-01-23
Free Malaysia Today | FMT
Why's our monitor labelling this an incident or hazard?
The article details a past AI Incident where Grok was misused to generate harmful sexually explicit content, which constitutes harm to communities and potentially violates rights. However, the current article's main focus is on the regulatory response, the introduction of safety measures, and the restoration of access. Since the article does not report new or ongoing harm but rather updates on mitigation and compliance efforts, it fits best as Complementary Information. The AI system's involvement in harm is background context, not the primary focus of new harm or hazard.

Malaysia lifts ban on Elon Musk's Grok AI chatbot after X adds safety measures

2026-01-23
The Straits Times
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was involved in generating harmful sexualized and non-consensual images, which is a direct harm to individuals and communities, fulfilling the criteria for an AI Incident. The article focuses on the resolution and mitigation measures taken after the incident, but the underlying harm had already occurred, making this primarily an AI Incident with complementary information about remediation. Since the harm was realized and led to regulatory action, this is not merely a hazard or unrelated news.

Malaysia lifts ban on Musk's Grok after safety measures added

2026-01-23
The Business Times
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was involved in generating harmful sexualized images, which is a violation of rights and causes harm to individuals, especially vulnerable groups like women and minors. This misuse led to regulatory action (ban), indicating that harm had occurred. The event reports the lifting of the ban after safety measures were added, but the harm had already materialized. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm, and the event concerns the resolution of that incident.

Access to Grok on X restored

2026-01-23
Head Topics
Why's our monitor labelling this an incident or hazard?
The AI system Grok was misused to generate manipulated and sexually explicit content, which constitutes harm to users and communities and possibly breaches legal protections. This misuse had already occurred, triggering a regulatory ban, which was then lifted after safety measures were introduced. The event involves the use and misuse of an AI system leading to realized harm, thus qualifying as an AI Incident. The article primarily reports on the lifting of the ban following mitigation efforts, but the underlying harm and regulatory action confirm the incident classification rather than a mere hazard or complementary information.

Malaysia Lifts Suspension On Musk's Grok Chatbot

2026-01-23
Channels Television
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system capable of generating content, including sexualised deepfake images. The generation of approximately three million sexualised images of women and children represents a clear harm to communities and individuals, fulfilling the criteria for an AI Incident. The Malaysian authorities' suspension and subsequent lifting of the ban after security measures were implemented indicate that harm had occurred and was addressed. Therefore, this event qualifies as an AI Incident due to the direct involvement of the AI system in causing harm through its outputs.

X has submitted proof of remedial action over Grok misuse, says Fahmi

2026-01-23
The Star
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Grok) that was misused to generate harmful content, which constitutes an AI Incident due to the realized harm (obscene content generation). However, the article's main focus is on the remedial actions taken by the platform and the regulatory response, rather than the incident itself. Therefore, this is Complementary Information providing updates on mitigation and governance responses to a previously reported AI Incident.

MCMC Lifts Ban On Grok After Applying New Safety Measures

2026-01-23
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) was used to generate harmful content, which led to a ban by the Malaysian regulator due to violations of laws protecting individuals from obscene and non-consensual manipulated images. This constitutes an AI Incident because the AI system's use directly led to harm (offensive and illegal content involving women and minors). The lifting of the ban after safety measures is a response to the incident, but the core event is the prior harm caused by the AI system's misuse. Therefore, the event is best classified as an AI Incident, as the harm has occurred and the AI system's role is pivotal.

MCMC lifts temporary access restriction on Grok

2026-01-23
Malaysiakini
Why's our monitor labelling this an incident or hazard?
The article centers on a regulatory decision and the implementation of preventive measures for an AI system, which is a governance and societal response to prior issues. There is no new harm or plausible future harm described in this event itself, nor is it a general product announcement. Therefore, it fits the definition of Complementary Information as it provides an update on the AI ecosystem and responses to previous concerns.

MCMC lifts restriction on Grok after X implements additional safety measures

2026-01-23
Head Topics
Why's our monitor labelling this an incident or hazard?
The AI system Grok was previously misused to generate harmful content, constituting an AI Incident due to harm to communities and violation of laws. This article focuses on the lifting of restrictions after additional safety measures were implemented, representing a regulatory and platform response to the prior incident. No new harm or plausible future harm is described here; rather, it updates on remediation and ongoing monitoring. Hence, it fits the definition of Complementary Information, providing follow-up details on a known AI Incident.

Malaysia lifts suspension on Musk's Grok chatbot

2026-01-23
The Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (Grok chatbot) and its role in generating harmful sexualised deepfake images, which led to regulatory suspension. The lifting of the suspension after preventive measures indicates a response to a prior AI Incident. Since the article focuses on the regulatory and platform response rather than describing a new or ongoing harm event, it fits the definition of Complementary Information. It updates on mitigation and compliance efforts rather than reporting a new AI Incident or a plausible future hazard.

Fahmi: Documentation shows X has acted on Grok AI misuse, safety measures verified

2026-01-23
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article indicates that misuse of Grok AI to generate explicit content had occurred, which constitutes harm related to the AI system's use. However, the harm is described as having been addressed and is no longer occurring. The main focus is on the verification of safety measures and ongoing regulatory oversight, rather than on new or ongoing harm. Therefore, this event is best classified as Complementary Information, as it provides an update on a previously reported AI Incident and the regulatory response, rather than describing a new incident or hazard.

Restrictions on Grok application in X lifted - MCMC

2026-01-23
Asia News Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose misuse led to harm (generation of pornographic and sexually explicit content), which triggered regulatory action (temporary ban). The lifting of the ban after security measures were implemented is a governance and regulatory update related to a previously identified AI Incident. Since the current article focuses on the lifting of restrictions and monitoring rather than describing new harm or potential harm, it constitutes Complementary Information that updates on the response to a prior AI Incident.

Malaysia Lifts Restriction on X's Grok AI After Safety Measures Implemented

2026-01-23
Head Topics
Why's our monitor labelling this an incident or hazard?
An AI system (the Grok AI chatbot) was involved, and its misuse led to the generation of harmful content, a direct harm to communities and a violation of legal protections. This misuse caused the regulator to impose a temporary restriction, which was later lifted after safety measures were implemented. The article focuses on the resolution and regulatory response, but the core event is the prior misuse that caused harm and triggered regulatory action, so it qualifies as an AI Incident.

Malaysia Lifts Ban on X's Grok AI Chatbot Following Security Measures

2026-01-23
Head Topics
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI chatbot) was directly involved in generating harmful content (sexually explicit images), which led to a regulatory ban, indicating an AI Incident due to violation of laws protecting rights and harm to communities. The subsequent lifting of the ban after security measures is a response to the incident but does not negate the fact that harm occurred. Therefore, this event is best classified as an AI Incident because the AI system's misuse caused realized harm, and the regulatory response is part of the incident's context.