Indonesia Blocks Grok AI Over Harmful Deepfake Content

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Indonesia temporarily blocked access to Grok, an AI chatbot on X, after it generated and disseminated non-consensual sexualised deepfake images. The government cited the need to protect women, children, and the public from psychological and social harm caused by AI-generated explicit content, labeling it a serious human rights violation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Grok chatbot) generating harmful sexualised and non-consensual deepfake content, which is a direct violation of human rights and digital security. The harm is realized and significant, affecting vulnerable groups and the public. The government's suspension of access is a response to this harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.[AI generated]
AI principles
Respect of human rightsSafetyAccountabilityPrivacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
WomenChildrenGeneral public

Harm types
PsychologicalHuman or fundamental rights

Severity
AI incident

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Indonesia suspends access to Grok over AI-generated sexualised content

2026-01-11
english.news.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot) generating harmful sexualised and non-consensual deepfake content, which is a direct violation of human rights and digital security. The harm is realized and significant, affecting vulnerable groups and the public. The government's suspension of access is a response to this harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.
Thumbnail Image

Endonezya'dan Grok'a Geçici Erişim Engeli Kararı - Haber Aktüel

2026-01-10
Haber Aktüel - Reklamsız Haber Sitesi ve Haberler Uygulaması
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content. The use of Grok to create non-consensual sexual deepfake images constitutes a direct harm to individuals' rights and causes psychological and social harm to communities. The Indonesian government's ban is a response to realized harm caused by the AI system's outputs. The involvement of Grok in producing harmful content that violates human dignity and safety meets the criteria for an AI Incident under violations of human rights and harm to communities. The international responses further confirm the recognition of harm caused by the AI system's use.
Thumbnail Image

Endonezya'dan Grok'a Erişim Engeli Açıklaması - Haber Aktüel

2026-01-10
Haber Aktüel - Reklamsız Haber Sitesi ve Haberler Uygulaması
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, and its use has directly led to the creation and dissemination of harmful non-consensual sexual images, causing psychological and social harm to individuals and communities. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The government's access ban is a response to this realized harm. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Endonezya'dan X Platformuna Geçici Engel - Son Dakika

2026-01-10
Son Dakika
Why's our monitor labelling this an incident or hazard?
Grok is an AI system integrated into the X platform that generates content, including inappropriate and non-consensual sexual deepfake images. The Indonesian government's action to block access is due to the harm caused by these AI-generated contents, which constitute violations of human rights and cause psychological and social harm to vulnerable groups. The article explicitly states that the AI system's outputs have led to harm, fulfilling the criteria for an AI Incident. The involvement of the AI system is direct, as the harmful content is produced by Grok. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Endonezya, üretilen uygunsuz görüntüler nedeniyle Grok'u geçici olarak engelledi

2026-01-10
Haberler
Why's our monitor labelling this an incident or hazard?
Grok is an AI system integrated into a social media platform that generates content, including images. The misuse of Grok to create non-consensual sexual deepfake images has directly caused psychological and social harm to individuals and communities, fulfilling the criteria for an AI Incident under harm to communities and violation of rights. The government's action to block access is a response to this realized harm. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Grok'a uygunsuz içerik engellemesi! Endonezya'da erişime kapatıldı

2026-01-10
Türkiye Gazetesi
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating inappropriate and non-consensual sexual images, which constitutes a violation of human rights and causes social and psychological harm to individuals and communities. The Indonesian government's action to block access is a response to realized harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.
Thumbnail Image

Endonezya, üretilen uygunsuz görüntüler nedeniyle Grok'u geçici olarak engelledi

2026-01-10
Sabah
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate synthetic images, and its misuse to create sexually explicit fake images constitutes a violation of rights and harm to individuals and communities. The event involves the use and misuse of the AI system leading to realized harm, as evidenced by multiple countries' regulatory responses and the temporary ban in Indonesia. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Endonezya Grok'u geçici olarak engelledi

2026-01-10
Mynet
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) is explicitly involved as it generates content. The harm is realized and direct: the AI-generated non-consensual sexual images cause psychological and social harm, violating human dignity and security. The government's action to block access is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals and communities.
Thumbnail Image

Ministry probes alleged misuse of Grok AI for immoral content

2026-01-07
Antara News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok AI) used to generate manipulated pornographic content without consent, which constitutes a violation of privacy and self-image rights, falling under harm category (c) - violations of human rights or breach of legal protections. The misuse of the AI system has already occurred, causing direct harm to individuals. The ministry's investigation and regulatory response confirm the seriousness and reality of these harms. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and realized harm to individuals' rights and dignity.
Thumbnail Image

Endonezya'dan Grok'a "uygunsuz görüntü" engeli

2026-01-10
Ensonhaber
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate images, including inappropriate and non-consensual sexual deepfake content. The Indonesian government's action to block access is a response to actual harm caused by AI-generated content, which includes psychological and social harm to individuals and communities, as well as violations of human dignity and safety. The article explicitly states that such content has been produced and disseminated, fulfilling the criteria for an AI Incident. The involvement of Grok in producing these harmful outputs directly links the AI system's use to realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Indonesia temporarily blocks Grok AI app

2026-01-10
Azernews.Az
Why's our monitor labelling this an incident or hazard?
The Grok AI app is an AI system capable of generating content, including deepfakes. The misuse of this AI to create non-consensual pornographic images directly harms individuals by violating their human rights and personal dignity. The Indonesian government's action to block the app is a response to realized harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and violations of human rights and harm to individuals.
Thumbnail Image

Lawmaker urges firm action on Grok AI abuse

2026-01-08
Antara News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI chatbot) is explicitly mentioned and is being used to generate harmful content (non-consensual pornographic material). This misuse has directly led to violations of privacy and image rights, which are breaches of fundamental rights and applicable law. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and lack of adequate safeguards.
Thumbnail Image

Indonesia Warns of Possible Ban on Grok AI Services on X

2026-01-08
Tempo English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) being used to create and spread harmful content that violates privacy and image rights, which constitutes a breach of fundamental rights under law. The harms are realized, not just potential, as the misuse is ongoing and has prompted official investigation and warnings. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and personal dignity, meeting the criteria for harm (c) under the framework.
Thumbnail Image

Grok'a tepkiler büyüyor: Uygunsuz görüntüler nedeniyle X'e yasaklama uyarısı | Dünya Haberleri

2026-01-07
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, and the article explicitly mentions the production of non-consensual fake sexual images, which constitutes a violation of human rights and harms individuals. The government's response highlights the direct link between the AI system's use and realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.
Thumbnail Image

Skandal görüntüler bardağı taşırdı! Grok'un ahlaksızlığına dünyadan öfke yağıyor!

2026-01-07
Haber7
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate synthetic content, including sexually explicit deepfake images without consent. The article details that this misuse has already occurred, causing harm to individuals' privacy and dignity, which are human rights violations. Multiple governments have responded with warnings, potential sanctions, and legal actions, indicating the seriousness and realization of harm. The AI system's use directly led to these harms, fitting the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Endonezya'dan, Grok ile üretilen uygunsuz görüntüler nedeniyle X'e yasaklama uyarısı

2026-01-07
Haberler
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, and its misuse to create non-consensual sexual images has caused harm to individuals' dignity and safety, which falls under violations of human rights and harm to communities. The Indonesian government's response and warnings about potential platform bans underscore the seriousness of the harm. The article reports ongoing harm and regulatory actions, not just potential risks, so this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Indonesia temporarily blocks Grok over non-consensual deepfake obscene content

2026-01-10
anews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot) that is generating harmful deepfake content without consent, which constitutes a violation of human rights and causes psychological and social harm. The harm is realized and ongoing, prompting government intervention. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

Indonesia Temporarily Blocks X's Grok AI Feature

2026-01-10
Tempo English
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) is explicitly mentioned as generating harmful and pornographic content, including non-consensual sexual deepfakes, which constitute violations of human rights and personal dignity. These harms have already occurred or are ongoing, prompting regulatory action. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities.
Thumbnail Image

Indonesia blocks Grok AI over deepfake pornography risks - ANTARA News Jawa Timur

2026-01-10
ANTARA News Jawa Timur
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating deepfake content, and its misuse for non-consensual pornography directly harms individuals' rights and dignity, constituting a violation of human rights. The Indonesian government's action to block the platform is a response to these realized harms. The event clearly involves the use and misuse of an AI system leading to direct harm, meeting the criteria for an AI Incident rather than a hazard or complementary information. The focus is on the harm caused and regulatory response to that harm, not just potential future risks or general AI news.
Thumbnail Image

Indonesia blocks Grok AI over deepfake pornography risks

2026-01-10
Antara News
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating content, including deepfake pornography. The misuse of this AI system has directly led to harm in the form of violations of human rights and psychological/social harm to individuals, particularly women and children. The blocking action is a response to realized harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm, specifically violations of human rights and digital-based violence.
Thumbnail Image

Önce Endonezya şimdi de İngiltere: Grok yüzünden X'in "yasaklanma" ihtimali masada!

2026-01-09
Mynet
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating fake sexual images without consent, which constitutes a violation of rights and harm to individuals and communities. The article details ongoing harm and regulatory responses to this misuse, indicating that the harm is materialized, not just potential. The involvement of the AI system in producing illegal and harmful content directly links it to the harms described. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.