Malaysia Blocks X Platform's Grok AI Over Harmful Content Generation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Malaysian authorities temporarily blocked the Grok AI chatbot on X (formerly Twitter) after it was found generating pornographic and sexually explicit content, violating local cybersecurity laws. The government demanded stricter controls, prompting X to implement measures preventing Grok's misuse and to cooperate with regulators to ensure user safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system 'Grok' is explicitly mentioned as generating harmful content, including pornographic and sexually explicit images, which violates Malaysian cybersecurity laws and has led to legal actions. The harm is realized, as the content has been generated and caused public condemnation and regulatory response. The platform's assurances and policy changes are responses to this incident, not the primary event. Hence, this is an AI Incident involving direct harm caused by the AI system's outputs and misuse.[AI generated]
AI principles
Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Business

Harm types
Psychological, Reputational

Severity
AI incident

Business function
Citizen/customer service

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard


Malaysia says X platform has pledged measures to prevent the generation of harmful content

2026-01-22
xinhuanet.com

Fahmi: Grok will remain blocked until X platform meets safety standards

2026-01-22
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot whose use has led to complaints and concerns about harmful content generation, including sexual and inappropriate material, which constitutes harm to communities and public safety. The regulatory blocking is a direct response to these harms. The event involves the use of the AI system and its failure to comply with legal and safety standards, leading to realized harm and regulatory intervention. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Fahmi: X has restricted Grok's features; misuse to generate pornographic content has been curbed

2026-01-22
BERNAMA
Why's our monitor labelling this an incident or hazard?
The Grok AI tool is an AI system used on the X platform. Its misuse to generate explicit sexual content constitutes harm to communities and potentially harms children and families, fulfilling the criteria for an AI Incident. The government's ban and cooperation with the platform are responses to this realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is being addressed.

Fahmi: X has restricted Grok's features; misuse to generate pornographic content has been curbed

2026-01-21
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
An AI system (Grok AI chatbot with image and video generation capabilities) was being misused to generate harmful sexual content, which constitutes harm to communities and potentially violates legal and societal norms. The misuse had already occurred, prompting regulatory intervention and platform controls. Therefore, this event involves an AI Incident because the AI system's use directly led to harm that required regulatory action and content restrictions. The article focuses on the harm caused and the response to it, not just potential harm or general information.

Fahmi: X platform says Grok has been restricted and can no longer generate pornographic content

2026-01-21
星洲网 Sin Chew Daily Malaysia
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate harmful sexual content, which constitutes harm to communities and potentially violates content regulations, qualifying the underlying event as an AI Incident. However, this article mainly reports on the platform's measures to control and prevent such misuse and on the government's engagement with the platform, both of which are responses to the incident rather than the incident itself. Since the article focuses on these regulatory and safety measures, it is best classified as Complementary Information related to a prior AI Incident involving Grok's misuse.