Indonesia Blocks Grok AI Over Deepfake Pornography Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Indonesian government temporarily blocked X's Grok AI chatbot after it was misused to generate non-consensual deepfake sexual content, particularly targeting women and children. Authorities demanded compliance with local regulations, prompting X to restrict Grok's image editing features and engage in discussions to restore access.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Grok AI chatbot) whose use has directly led to harm by enabling the creation and spread of non-consensual deepfake sexual content, which violates human rights and causes psychological and reputational harm to victims. The government's blocking of the AI service and legal measures against misuse confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals and communities.[AI generated]
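
The rationales on this page all apply the same three-way triage: coverage that mainly describes responses or mitigation is Complementary Information, realized harm makes an event an AI Incident, and credible but unrealized harm makes it an AI Hazard. A minimal Python sketch of that decision rule follows, assuming the framework reduces to four boolean signals; the Event fields and the triage function are illustrative assumptions, not the monitor's actual implementation.

from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool      # an AI system is explicitly involved
    response_focused: bool        # the article mainly covers mitigation or governance
    harm_materialized: bool       # harm has already occurred (e.g. deepfakes made and spread)
    plausible_future_harm: bool   # credible risk of harm that has not yet materialized

def triage(event: Event) -> str:
    # Mirrors the order of reasoning used in the rationales below.
    if not event.involves_ai_system:
        return "Not AI-related"
    if event.response_focused:
        return "Complementary Information"   # e.g. articles on X's new restrictions
    if event.harm_materialized:
        return "AI Incident"                 # e.g. the non-consensual Grok deepfakes
    if event.plausible_future_harm:
        return "AI Hazard"                   # e.g. a regulator's warning of future harm
    return "Complementary Information"

# This entry: harm has materialized and the coverage centres on the harm itself.
assert triage(Event(True, False, True, False)) == "AI Incident"
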
AI principles
Respect of human rights, Privacy & data governance, Safety, Accountability, Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, Children

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Video: Finally! Grok Can No Longer Be Used to Edit Photos into Pornography on X

2026-01-15
20DETIK
Why's our monitor labelling this an incident or hazard?
The article focuses on the company's response to prior misuse of the AI system Grok, specifically its use in generating pornographic content from real photos, which likely violated rights and raised legal concerns. The current event concerns the implementation of restrictions and safety measures to prevent further harm. Since the main narrative is about the response and mitigation measures rather than a new incident or hazard, this qualifies as Complementary Information.

X Contacts Komdigi to Discuss the Fate of Grok AI, Blocked over Lewd Content

2026-01-14
detikinet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI chatbot) whose use has directly led to harm by enabling the creation and spread of non-consensual deepfake sexual content, which violates human rights and causes psychological and reputational harm to victims. The government's blocking of the AI service and legal measures against misuse confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals and communities.

How Long Will Grok Stay Blocked? Here Is Komdigi's Answer

2026-01-14
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot) whose misuse has directly led to harm by producing and spreading illegal nonconsensual deepfake sexual content, violating human rights and harming individuals and communities. The government's intervention to block access is a response to this realized harm. The AI system's use and malfunction (or misuse) are central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

X Claims Grok Has Repented and Will No Longer Make Obscene AI Content

2026-01-15
detikinet
Why's our monitor labelling this an incident or hazard?
An AI system (Grok chatbot) is explicitly involved in generating and editing images, including sexually explicit deepfake content. The use of this AI system has directly led to harm through the production and dissemination of non-consensual sexual images, which violates human rights and legal protections. The ongoing investigations and legal actions confirm the harm has materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

Drawing Global Condemnation, X Finally Stops Grok from Making Pornographic Content

2026-01-15
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used for image editing that has been misused to create non-consensual sexualized deepfake images, which constitutes a violation of rights and harm to individuals and communities. The article details actual harm caused by the AI system's outputs, including legal investigations and bans in countries like Indonesia and Malaysia. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely about potential harm or policy responses but about realized harm caused by the AI system's misuse.

Update on Komdigi's Block of Grok AI over Lewd Content: Here Is What X Is Doing

2026-01-14
tangerang.viva.co.id
Why's our monitor labelling this an incident or hazard?
An AI system (Grok AI chatbot) is explicitly involved, and its misuse has directly led to harm in the form of generating pornographic content targeting vulnerable groups (women and children), which constitutes harm to communities and individuals. The regulatory blocking and subsequent feature restrictions are responses to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm requiring government intervention.

Government Stands Firm Against Grok AI; Platform X Promises to Comply with Indonesian Rules

2026-01-14
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned and is involved in generating harmful deepfake pornographic content, which constitutes harm to individuals and communities (specifically women and children). The government's temporary access cut and regulatory demands indicate that harm has already occurred or is ongoing, making this an AI Incident. The event focuses on the use and misuse of the AI system leading to violations of dignity and potential legal breaches, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

X Answers Komdigi's Summons, Makes This Promise to the Government

2026-01-14
nasional
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating images based on user prompts. The misuse of this system to create sexualized and exploitative content involving vulnerable groups constitutes harm to communities and individuals, including potential violations of rights and exploitation. The event involves the use of the AI system leading directly to harm, triggering regulatory responses and access restrictions. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs and its misuse.

Komdigi Spells Out the Conditions for Lifting the Grok AI Block

2026-01-14
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok AI chatbot) whose misuse has directly led to harm by producing illegal nonconsensual deepfake sexual content, violating human rights and harming individuals. The government's action to block access is a response to this AI Incident. The article details the harm caused, the regulatory measures, and the conditions for restoring access, indicating a clear AI Incident rather than a mere hazard or complementary information.

Elon Musk Claims He Did Not Know Grok Manipulated Obscene Photos of Minors

2026-01-15
nasional
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating images based on user prompts. The event reports that Grok AI was used to create and disseminate manipulated pornographic images of children and explicit content without consent, which constitutes harm to individuals and communities and breaches legal protections. The involvement of the AI system in producing this harmful content is direct and central to the incident. The resulting government actions to block the platform further confirm the recognition of harm. Hence, this is an AI Incident due to realized harm caused by the AI system's outputs and misuse.

South Korean Authorities Ask X to Provide Youth-Protection Measures

2026-01-15
world.kbs.co.kr
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved and is capable of generating harmful deepfake sexual exploitation content, which could violate rights and harm communities. The regulatory commission's request for protective measures and warnings about criminality indicate credible concern about plausible future harm. No direct harm or incident is reported in the article, only the potential for harm and regulatory responses. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Grok's Obscene-AI Controversy: X Announces Restrictions After Being Blocked by Indonesia

2026-01-15
Jawa Pos
Why's our monitor labelling this an incident or hazard?
Grok is an AI system involved in generating and editing images, which can be used to create non-consensual sexual content, constituting violations of privacy and potentially human rights. The controversy and government action indicate that harm related to exploitation and privacy breaches has occurred or is ongoing. X's response with restrictions and moderation policies is a reaction to these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of rights and harm to individuals and communities.

Grok Accused of Being an Obscenity Machine; X Announces Strict Restrictions and Geoblocking

2026-01-15
Jawa Pos
Why's our monitor labelling this an incident or hazard?
Grok is an AI system with image generation and editing capabilities. The controversy arises from its potential for misuse to create non-consensual sexualized images, which violates privacy and could facilitate sexual exploitation, a form of harm to individuals and communities. The article reports concerns about actual harm and governmental blocking due to these risks, indicating realized or ongoing harm. The company's response is a mitigation effort but does not negate the fact that harm has occurred or is occurring. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm linked to the AI system's use and misuse.

Without Apologizing, X Admits to an Obscenity Loophole in Grok and Begins Imposing Restrictions

2026-01-15
Jawa Pos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok and its misuse potential for creating harmful content such as non-consensual pornography, which is a violation of rights and a form of harm. However, the article does not report new or ongoing harm caused by the AI system but rather the platform's response to prior criticisms and regulatory pressures. The focus is on the implementation of new restrictions and safeguards to prevent future harm, which fits the definition of Complementary Information. There is no direct or indirect indication of a new AI Incident or an AI Hazard that could plausibly lead to harm beyond what is already known. Hence, the event is an update on mitigation and governance measures rather than a new incident or hazard.

X Tightens Grok Features, Limiting the Creation and Editing of Images of People in Bikinis

2026-01-15
VOI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok used for image creation and editing, indicating AI system involvement. The measures described are preventive and governance-oriented, aiming to reduce risks of harm such as exploitation, illegal content, and violations of rights. No actual harm or incident resulting from the AI system is reported; rather, the platform is proactively tightening controls and cooperating with authorities. This fits the definition of Complementary Information, as it details societal and governance responses to AI-related risks without describing a new AI Incident or AI Hazard.

Elon Musk Admits He Did Not Know Grok Could Make Nude Photos

2026-01-15
detikinet
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including illegal explicit content. The generation of such content, especially involving minors, constitutes a violation of laws protecting fundamental rights and causes harm to communities. The article states that this harmful content has been produced and led to regulatory actions and public outcry. Therefore, the AI system's use has directly led to harm, qualifying this event as an AI Incident rather than a hazard or complementary information.

Following Indonesia and Malaysia, the Philippines Will Block Elon Musk's Grok

2026-01-15
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system generating content, including problematic deepfake sexual content, which constitutes harm to communities and individuals. The Philippine government's blocking of Grok follows similar actions by Indonesia and Malaysia, indicating that harms have been realized or are ongoing in the region. The article focuses on the regulatory response to these harms rather than new incidents, but the harms from the AI system's outputs are clearly established. Therefore, this event is best classified as Complementary Information, as it details governance and societal responses to an AI Incident rather than describing a new incident or hazard itself.

Condemned Globally, X Finally Stops Grok AI from Being Able to "Undress" Photos of Real People

2026-01-15
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating manipulated images based on user requests. The event reports that the AI was used to create harmful and illegal content involving real people, including children, which constitutes a violation of human rights and legal protections. The harm has already occurred, as the AI system was used to produce and distribute such content. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's misuse and the company's response to mitigate further harm.

After Indonesia Blocks Grok AI, X Shuts Down the Feature for Making Obscene Deepfakes

2026-01-15
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating deepfake images. The article details how its misuse to create explicit, non-consensual deepfake content has led to government blocks and platform restrictions, indicating actual harm to individuals' rights and societal harm. The involvement of AI in generating harmful content and the resulting regulatory actions confirm this as an AI Incident rather than a hazard or complementary information.

X Announces Steps to Prevent Grok from Undressing Images and Photos of Real People

2026-01-16
https://ototekno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful content (sexualized images of real people), which has led to investigations and access blocks, indicating prior AI Incidents. The current article reports on the platform's response and implementation of safeguards to prevent further harm. Since the main focus is on the response and mitigation rather than a new harm event, this qualifies as Complementary Information rather than a new AI Incident or AI Hazard.

X Finally Fixes Grok After a "Flood" of Obscene Photo Manipulations

2026-01-16
nasional
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images based on user prompts. The misuse of Grok to create sexually explicit images, including those depicting minors in provocative or minimal clothing, directly violates legal frameworks protecting against child sexual abuse material and non-consensual intimate imagery. This constitutes a breach of obligations under applicable law and harms communities and individuals. The platform's response to restrict and moderate content, as well as the regulatory investigation, further confirms the materialized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing significant harm and legal violations.

Grok Banned from Editing Revealing Photos Across the X Platform

2026-01-15
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok's use has directly led to harm in the form of generating sexualized images without consent, including images resembling children, which constitutes exploitation and violation of rights. This is a clear AI Incident as the AI's outputs have caused harm to individuals and communities, triggering legal investigations and regulatory responses. The article details realized harm and the platform's response, fitting the definition of an AI Incident rather than a hazard or complementary information.

Under Global Pressure, X Blocks Grok over Nude Image Editing

2026-01-15
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned and is used for image editing, which qualifies as an AI system. The event involves the use of the AI system and its potential misuse to create harmful sexualized images without consent, which could lead to violations of rights and harm to individuals and communities. However, the article does not report that such harm has already occurred due to Grok's use; instead, it reports on the platform's proactive blocking measures to prevent such harm. Therefore, this event represents a plausible risk of harm that is being mitigated, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

X tightens Grok policy, bans creation of images of scantily clad people

2026-01-15
ANTARA News Kalteng
Why's our monitor labelling this an incident or hazard?
The article centers on the platform's response to prior concerns and investigations about AI-generated sexualized images, including those involving children, which constitute violations of rights and potential harm. However, the main focus is on the implementation of new policies, regulatory scrutiny, and preventive measures rather than describing a new or ongoing AI Incident causing harm. Therefore, this is best classified as Complementary Information, as it provides updates on governance and societal responses to previously identified AI-related harms and risks.

X Restricts Grok AI from Making Images of People in Revealing Clothing

2026-01-15
Kabarin.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating images with sexual content, including those resembling children, which is a clear violation of laws protecting children and human rights. The harms are realized, as investigations and content blocks are underway due to these outputs. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and legal obligations, and harm to communities. The article focuses on the harms caused and regulatory responses, not just on the responses themselves, so it is not merely Complementary Information. The presence of the AI system and its role in causing harm is explicit and central.

Grok AI Draws Controversy over Obscene Content; Musk Claims He Did Not Know

2026-01-15
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful content, including sexually explicit images involving minors, which constitutes a violation of human rights and harm to communities. The controversy has led to governmental interventions and legal actions, confirming the materialization of harm. The AI system's use directly caused these harms, fulfilling the criteria for an AI Incident. The mitigation measures by xAI are responses to the incident, not the primary focus of the article, so the classification remains AI Incident rather than Complementary Information.

X bans Grok AI from making images of people in revealing clothing

2026-01-15
Antara News
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as it generates and edits images, including problematic sexualized content without consent, some involving minors, which constitutes harm to individuals and breaches of legal and ethical standards. The platform's policy changes and regulatory investigations confirm that harm has occurred or is ongoing. The event details direct consequences of the AI system's use leading to violations of rights and potential exploitation, fitting the definition of an AI Incident rather than a hazard or complementary information. The presence of investigations and policy enforcement further supports the classification as an incident.

The Philippines Follows Indonesia's Lead amid Widespread Calls for a Block

2026-01-15
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating deepfake sexual images, which is explicitly stated. The harms include violations of human rights, dignity, and safety, as well as harm to communities through the spread of pornographic fake content. The governments' blocking actions and legal responses confirm that harm has occurred. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The article focuses on the harm caused and responses to it, not just on general AI developments or potential risks, so it is not Complementary Information or an AI Hazard.

Elon Musk's Unexpected Comment After Angering the Whole World

2026-01-15
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot by xAI) generating harmful sexual content, including images of children, which constitutes harm to communities and violations of rights. The harm is realized and ongoing, with multiple governments responding with investigations, legal actions, and service blocks. The AI system's outputs are directly linked to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Grok Blocked; X Promises to Comply with Indonesian Regulations

2026-01-15
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok AI) that has been used to produce and disseminate non-consensual deepfake sexual content, which constitutes a violation of human rights and causes harm to individuals and society. The harm is realized, as the government has taken action by blocking access to the service and requiring compliance with legal and ethical standards. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm (violation of rights and harm to communities). The government's regulatory response and the platform's commitment to remediate are complementary but do not change the primary classification of the event as an AI Incident.

X Blocks Obscene Image Editing in Grok AI After Condemnation

2026-01-15
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating manipulated images, including deepfake pornography. The misuse of this AI system has directly led to harms such as the creation and dissemination of illegal sexual content involving minors, which is a violation of laws protecting children and a harm to communities. The platform's response to block and restrict features is a reaction to an ongoing AI Incident. The involvement of regulatory actions and platform measures confirms the realized harm. Hence, the event is best classified as an AI Incident.

Indonesia Is the First Country to Ban the Grok Platform over Obscene Content

2026-01-15
Buletin TV3
Why's our monitor labelling this an incident or hazard?
The article clearly states that the AI system Grok was misused to create deepfake pornographic content without consent, harming victims' rights and dignity, which constitutes a violation of human rights and harm to communities. This is a direct harm caused by the AI system's use, qualifying it as an AI Incident. The governmental ban and investigations are responses to this incident but do not change the classification.

X Restricts Grok to Protect Users from Sexual AI Images

2026-01-15
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as it is used to generate or edit images of real people into sexualized content, which has led to harm such as violation of rights and potential exploitation, including child sexual abuse imagery. The article describes ongoing harm from the use of this AI system and the platform's response to mitigate it. Since harm has occurred and the AI system's use is directly linked to it, this qualifies as an AI Incident. The article focuses on the incident and the platform's mitigation measures rather than just providing background or general AI news, so it is not merely Complementary Information.

Elon Musk's Social Platform X Restricts the Grok Feature That Turns People's Photos into Sexual Content

2026-01-15
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok is directly involved in generating sexualized deepfake images of real people without consent, which constitutes harm to individuals' rights and communities. The article describes realized harm through the spread of these images and legal investigations, not just potential harm. The platform's response to restrict the AI feature is a mitigation measure but does not negate the fact that harm has occurred. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Following Indonesia and Malaysia, the Philippines Blocks Elon Musk's Grok AI

2026-01-16
investor.id
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating content, including harmful deepfake sexual images involving real people, which constitutes a violation of rights and harm to communities. The governments' blocking actions are responses to these realized harms. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential harm or governance responses but reports on actual harm and governmental intervention due to the AI system's outputs.

Mother of Elon Musk's Child Sues xAI After Grok Makes Fake Nude Photos

2026-01-16
nasional
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot by xAI) that was used to generate manipulated deepfake images causing harm to a person (Ashley St. Clair). The harms include violation of privacy, emotional distress, and sexual exploitation through AI-generated content. The misuse of the AI system directly led to these harms, fulfilling the criteria for an AI Incident. The article also mentions ongoing responses by the company to limit such misuse, but the primary focus is on the realized harm and legal action, not just complementary information or potential future harm.

Following Indonesia, Malaysia Also Blocks Grok AI over the Potential Manipulation of Obscene Content

2026-01-17
Jawa Pos
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned and is used to generate manipulated sexual images without consent, constituting a violation of human rights and harm to individuals' dignity and security. The harm is realized and ongoing, as evidenced by user complaints and government actions to block the system. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities).

Banning the Grok AI Application: Locking the Door on Digital Despotism

2026-01-17
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (Grok) that has directly led to harms including violations of privacy rights, production of harmful explicit content, and dissemination of disinformation that threatens social and democratic stability. These harms fall under violations of human rights and harm to communities. The government's action to block Grok is a response to these realized harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are occurring and the AI system's role is pivotal.

Elon Musk Defends Grok AI over Obscene Content; Here Is What He Said

2026-01-18
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article centers on the potential misuse of the AI system Grok to generate harmful content, specifically child sexualized images, which would constitute a serious violation of rights and harm to communities. However, Elon Musk and the platform deny any known cases of such content being generated. The measures taken (restrictions, geoblocking, paid user limits) are preventive. Since no actual harm has been confirmed, but the risk is credible and has led to regulatory and platform responses, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the controversy and potential harm, nor is it unrelated as it directly involves an AI system and its societal impact.

X Finally Tightens Grok AI Policy, Restricting Photo Edits into Obscene Content

2026-01-17
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use previously led to the creation and dissemination of inappropriate sexualized images of real people without consent, which constitutes harm to individuals and communities (violation of rights and potential psychological harm). The platform's policy update is a response to this realized harm and aims to mitigate further incidents. Since the harm has already occurred and the AI system's use directly contributed to it, this qualifies as an AI Incident. The article focuses on the policy change as a response to the incident but the core issue is the prior misuse causing harm.

List of Countries Taking a Stand on Grok AI's Negative Impact

2026-01-17
IDN Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok as an AI system used to generate deepfake sexual content without consent, which constitutes a violation of human rights and harms to individuals, especially women and children. The government's intervention to block access is a response to these realized harms. The harms include violations of privacy, dignity, and potential psychological and social damage, fitting the definition of an AI Incident due to direct harm caused by the AI system's use.

Elon Musk Cannot Wash His Hands of Grok's Non-Consensual Content

2026-01-17
IDN Times
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate content, and its misuse has resulted in the creation and distribution of illegal and harmful content, including nonconsensual images and CSAM. The harm is realized and ongoing, involving violations of human rights and legal protections. The platform's response acknowledges the harm and the responsibility to mitigate it, confirming the AI system's role in causing the incident. Hence, this event meets the criteria for an AI Incident.

Grok Stirs Heat: California Prosecutor Sends Stern Letter to xAI over Sexual Deepfakes

2026-01-17
VOI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok being used to create deepfake sexual content without consent, including involving children, which is illegal and harmful. The involvement of the AI system in generating this content directly leads to violations of human rights and legal protections against child sexual abuse material. The legal action and cease-and-desist order indicate that harm has occurred and is ongoing. Hence, this is an AI Incident due to realized harm caused by the AI system's use.

Refusing to Accept Being "Undressed," Elon Musk's Ex Takes Grok to Court

2026-01-18
detikinet
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to create harmful, non-consensual, digitally nude images of real people, including minors, which is a clear violation of rights and causes harm to individuals and communities. The misuse of the AI system has directly led to these harms, and the legal action highlights the failure to mitigate these harms effectively. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs and the company's inadequate response.