Grok AI Misused for Non-Consensual Sexual Images, Triggers Regulatory Action

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Grok, an AI chatbot on X (formerly Twitter) developed by xAI, was misused to generate non-consensual sexualized images, leading to public outrage and government intervention in the UK and Indonesia. Authorities demanded action, with Indonesia temporarily blocking Grok and X restricting image generation to paid users to curb abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

Grok AI is an AI system capable of generating content, including deepfake pornography without consent, which is a violation of human rights and causes psychological and social harm. The government's blocking of the system is a direct response to these harms and the risk of ongoing violations. Since the AI system's use has directly led to or enables harm to individuals and communities, this qualifies as an AI Incident. The article focuses on the harm caused and the regulatory response, not merely on potential future harm or general AI news.[AI generated]
AI principles
AccountabilityPrivacy & data governanceRespect of human rightsSafety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rightsPsychological

Severity
AI incident

Business function:
Other

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Grok AI Diblokir Pemerintah Waspadai Deepfake Porno

2026-01-10
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating content, including deepfake pornography without consent, which is a violation of human rights and causes psychological and social harm. The government's blocking of the system is a direct response to these harms and the risk of ongoing violations. Since the AI system's use has directly led to or enables harm to individuals and communities, this qualifies as an AI Incident. The article focuses on the harm caused and the regulatory response, not merely on potential future harm or general AI news.
Thumbnail Image

Pemerintah Blokir Grok, Ini Alasannya - Harianjogja.com

2026-01-10
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, including deepfake sexual images without consent, which constitutes a serious violation of human rights and causes harm to individuals and communities. The government's blocking of Grok is a direct response to these harms. The article clearly states that the AI system's use has led to the spread of harmful content, fulfilling the criteria for an AI Incident under the OECD framework.
Thumbnail Image

Komdigi Resmi Blokir Grok AI Elon Musk, Minta X Beri Klarifikasi

2026-01-10
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use has directly caused harm by facilitating the creation and spread of non-consensual sexual deepfake content, which violates human rights and harms community safety. The government's blocking action and regulatory references confirm the harm has materialized. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.
Thumbnail Image

Jakarta Putus Akses Grok Demi Bendung Arus Pornografi Buatan AI

2026-01-10
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) involved in generating harmful deepfake pornography content without ethical filtering, leading to serious human rights violations. The government's intervention to block access is a response to realized harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of human rights and security).
Thumbnail Image

X batasi gambar Grok usai kritik global

2026-01-10
ANTARA News Kalteng
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including harmful content such as non-consensual pornography involving minors and public figures. The widespread creation and dissemination of such images constitute harm to communities and violations of rights. The article details realized harm and regulatory responses, indicating an AI Incident. The company's partial mitigation does not negate the fact that harm has occurred due to the AI system's use. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Tidak Hanya Indonesia, Inggris Siapkan Opsi Blokir X Imbas Skandal Deepfake Grok

2026-01-10
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as enabling the creation of non-consensual deepfake pornography, which constitutes a violation of human rights and causes harm to individuals (sexual abuse and harassment). The harm is realized and ongoing, as victims have reported manipulated images. The government's response to potentially block the platform underscores the severity of the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Thumbnail Image

Menkomdigi: Pemerintah Blokir Grok AI untuk Cegah Deepfake Pornografi

2026-01-10
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) that is used to generate deepfake pornographic content, which constitutes a violation of human rights and dignity, a form of harm under the AI Incident definition. The government's blocking of the AI system is a response to this harm and the threat it poses. The harm is realized or ongoing, as the government acts to prevent further dissemination. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's use and violations of human rights through non-consensual deepfake pornography.
Thumbnail Image

Komdigi Blokir Sementara Grok AI Tangkal Konten Pornografi : Okezone News

2026-01-10
https://news.okezone.com/
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok) that generates fake pornographic content, which is a direct violation of human rights and harms individuals' dignity and security. The government action to block the AI service is a response to this realized harm. Since the AI system's use has directly led to violations of human rights and harm to communities, this fits the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Imbas Deepfake Mesum, Apple-Google Diminta Blokir Aplikasi X dan Grok

2026-01-10
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-powered applications (X and Grok) enabling users to create and spread explicit deepfake images without consent, which is a direct violation of rights and causes harm to individuals and communities. The harm is realized, as evidenced by widespread criticism, investigations, and calls for app removal. The AI system's use in generating harmful content and the platform's failure to prevent this constitutes direct involvement in causing harm. Hence, this event meets the criteria for an AI Incident.
Thumbnail Image

Video: Akses Edit Foto Grok Dibatasi, Komisi Eropa Nilai Tak Cukup

2026-01-10
20DETIK
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating and editing images, including deepfakes. The creation and publication of non-consensual deepfake images constitute a violation of personal rights and potentially other legal protections, thus meeting the criteria for an AI Incident. The harm has already occurred as the deepfakes were made and published. The European Commission's assessment that the restriction is insufficient further supports the recognition of harm. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Video: Tanya ke Grok soal Pembatasan di RI, Dijawab Gini...

2026-01-10
20DETIK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) that has been used to generate unauthorized pornographic content, causing harm to vulnerable groups and society. The government's action to block access is a response to this realized harm. The AI system's use has directly led to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Kementerian Komdigi Putus Akses Sementara Grok untuk Tangkal Konten Pornografi AI

2026-01-10
SINDOnews Nasional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating pornographic deepfake content, which constitutes a violation of human rights and harm to individuals and communities. The government's action to suspend access is a response to this realized harm. Hence, this qualifies as an AI Incident because the AI system's use has directly led to harm, specifically violations of human rights and harm to communities through the spread of harmful content.
Thumbnail Image

Media Asing Soroti Indonesia Jadi Negara Pertama Blokir Total Grok Milik Musk

2026-01-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok chatbot) that generates pornographic deepfake content, which is harmful and violates human rights and dignity. The Indonesian government blocked the AI system's access to protect its population from these harms. The harm is realized and ongoing, as the AI system's outputs have led to violations of rights and community harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of human rights and harm to communities).
Thumbnail Image

Komdigi Putus Akses Grok untuk Perlindungan dari Konten Palsu

2026-01-10
investor.id
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake images, including non-consensual sexual content, which causes psychological and social harm to victims, violating their human rights. The government's action to suspend access is a response to realized harm caused by the AI system's outputs. The article explicitly links the AI system's use to serious harm, including violations of human rights and digital violence. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Komdigi Blokir Sementara Akses Grok, Minta X Segera Klarifikasi

2026-01-10
viva.co.id
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) is explicitly mentioned as generating harmful deepfake pornographic content, which constitutes a violation of human rights and harm to individuals and communities. The blocking of access is a response to realized harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm (non-consensual deepfake pornography) and human rights violations.
Thumbnail Image

Komdigi Blokir Sementara Grok

2026-01-10
Tempo
Why's our monitor labelling this an incident or hazard?
Grok is an AI system (an artificial intelligence feature) on platform X. The event involves the use of this AI system to produce nonconsensual sexual deepfake content, which constitutes a violation of human rights and harms the dignity and security of individuals, especially women and children. This harm has already occurred, prompting government intervention to block access temporarily. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm (violation of rights and harm to communities).
Thumbnail Image

Tanpa Menunggu Lama, Komdigi Putuskan Blokir Grok Imbas Penyalahgunaan AI di Media Sosial - Radar Bojonegoro

2026-01-10
Agak Laen: Menyala Pantiku Mampu Menyalip Jumlah Penonton KKN di Desa Penari, Jumbo, dan Avengers: End Game? - Radar Bojonegoro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) integrated into social media X that allows users to manipulate photos using AI. The misuse of this AI system has directly caused harm by enabling the creation and spread of non-consensual pornographic deepfake images, violating human rights and personal dignity. The government's response to block access confirms the severity and realization of harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and lack of adequate safeguards.
Thumbnail Image

Kemkomdigi Blokir Aplikasi Grok yang Dinilai Bisa Memproduksi Konten Pornografi

2026-01-10
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) capable of producing deepfake content, which is being misused to create pornographic material without consent. This misuse constitutes a violation of human rights and digital safety, directly harming individuals and communities. The government's blocking of the application is a response to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm related to human rights violations and digital violence.
Thumbnail Image

Media Asing Soroti RI Negara Pertama Blokir Grok AI Milik Elon Musk

2026-01-10
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok chatbot) generating harmful content (pornographic and non-consensual deepfake images), which has led to the Indonesian government blocking access to protect citizens, especially women and children. The harm is realized and ongoing, involving violations of human rights and dignity, which fits the definition of an AI Incident. The involvement of the AI system is direct, as the content is generated by the AI and the lack of adequate safeguards caused the harm. The government's regulatory response and platform restrictions are complementary but do not change the classification of the event as an AI Incident.
Thumbnail Image

Komdigi Blokir Sementara Grok AI Demi Lindungi Masyarakat dari Deepfake - Radar Situbondo

2026-01-10
Teheran Umumkan Perang Skala Penuh Lawan Blok Barat - Radar Situbondo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI) being used to generate harmful deepfake content without consent, causing violations of privacy and human rights. The misuse and insufficient moderation of the AI system have directly led to realized harm to individuals and communities. The government's response to block access is a reaction to this harm. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and malfunction.
Thumbnail Image

Elon Musk Respons Grok AI yang Diprotes karena Konten Asusila

2026-01-10
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system generating content, including deepfake images, which are explicitly described as non-consensual sexual content. This constitutes a violation of human rights and harm to individuals' dignity, fulfilling the criteria for an AI Incident. The event involves the use of the AI system leading directly to harm, as evidenced by government interventions and public criticism. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Skandal Konten Berbahaya Grok AI: Paywall X Gagal Total

2026-01-10
gadget.viva.co.id
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system generating harmful content, including non-consensual sexual images, which directly harms individuals and communities. The article reports that despite mitigation efforts (paywall), harmful outputs continue at a significant scale, indicating ongoing harm caused by the AI system's use. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The presence of the AI system is explicit, the harm is realized and ongoing, and the event is not merely a potential risk or complementary information but a current incident of harm.
Thumbnail Image

Kemkomdigi Blokir Grok AI, Kenapa Platform X Harus Bertanggung Jawab?

2026-01-10
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology (Grok) to generate nonconsensual sexual deepfake content, which constitutes a violation of human rights and harms vulnerable groups. This harm is realized and ongoing, as evidenced by the government's action to block the platform and protect society. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities, fulfilling the criteria for an AI Incident under the OECD framework.
Thumbnail Image

Pemerintah blokir Grok guna cegah penyalahgunaan AI | Indotelko

2026-01-10
IndoTelko
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot and model integrated with a social media platform, explicitly described as being used to generate non-consensual deepfake pornography, which constitutes a violation of human rights and harm to individuals. The article states that this misuse has already occurred, prompting government intervention. The harms include violations of fundamental rights and psychological/social harm, fitting the definition of an AI Incident. The government's blocking action is a response to these realized harms, not merely a precautionary measure, confirming the classification as an AI Incident.
Thumbnail Image

TOP 5: Komdigi Putus Akses ke Grok hingga Rocky Gerung di Rakernas PDIP

2026-01-10
IDN Times
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the source of the harmful content (AI-generated fake pornography). The harm is to individuals and communities through the dissemination of harmful, false pornographic material, which constitutes harm to communities and individuals. The government's intervention indicates that harm is occurring or imminent. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, prompting regulatory action.
Thumbnail Image

Indonesia Resmi Blokir Grok AI, Media Dunia Soroti Deepfake - Harianjogja.com

2026-01-11
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI) whose use has directly led to harm in the form of non-consensual pornographic deepfake content, violating human rights and community safety. The Indonesian government has blocked the AI system to protect citizens from this harm, indicating that the harm is realized and significant. The involvement of the AI system in producing harmful content and the resulting regulatory response meet the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.
Thumbnail Image

Grok AI di X Dipakai Buat Gambar Seksual, Inggris Ancam Penegakan Hukum

2026-01-09
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok AI, an AI chatbot with image editing capabilities, being used to create sexualized images of people without their consent. This misuse directly harms individuals by violating their rights and dignity, fulfilling the criteria for harm under the AI Incident definition (violations of human rights and harm to communities). The AI system's development and use have directly led to this harm. The presence of regulatory and governmental responses underscores the realized harm and the need for remediation. Hence, the event is classified as an AI Incident.
Thumbnail Image

Elon Musk Batasi Fitur Edit Foto AI Grok di X gara-gara Konten Tak Senonoh

2026-01-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) is explicitly involved in generating harmful content (pornographic deepfake images), which constitutes harm to communities and possibly breaches legal or ethical standards. The harm is realized, as governments and officials have expressed concern and taken actions such as investigations and calls for app removal. Therefore, this qualifies as an AI Incident. The article also discusses the response (limiting features), but the primary focus is on the harm caused by the AI system's use and the resulting societal and regulatory reactions, not just the response itself.
Thumbnail Image

Komdigi Blokir Sementara Aplikasi dan Situs Web Grok AI

2026-01-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to generate content, including manipulated images (deepfakes). The misuse of this AI to create non-consensual sexual content constitutes a violation of human rights and harms individuals, fulfilling the criteria for an AI Incident. The government's action to block access is a response to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Thumbnail Image

X batasi pembuatan gambar Grok hanya untuk pelanggan berbayar

2026-01-10
Antara News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as being used to generate images, including harmful pornographic content without consent, which constitutes a violation of rights and harm to communities. The misuse of the AI system has caused realized harm, prompting regulatory investigations and public condemnation. The restriction to paid users is a response to this harm but does not negate the fact that harm has already occurred due to the AI system's use. Hence, this is an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

Kontroversi AI "Grok": Tentang Kecerdasan, namun Tidak Melindungi Privasi Pengguna

2026-01-10
KOMPASIANA
Why's our monitor labelling this an incident or hazard?
Grok is an AI system explicitly mentioned that performs image editing based on user input. The misuse of Grok to create non-consensual, inappropriate images directly harms individuals' privacy and dignity, constituting a violation of human rights. The harm is realized and ongoing, as the edited images are publicly visible and affect users on the platform. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.
Thumbnail Image

Komdigi Blokir Sementara Akses Grok

2026-01-10
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including pornographic deepfakes without consent, which directly harms individuals' rights and dignity, constituting a violation of human rights and harm to communities. The article states that such content has been produced and spread, causing real harm. The government's blocking of Grok is a response to this harm, confirming that the AI system's use has directly led to an AI Incident. The involvement of AI in generating harmful content and the resulting psychological, social, and legal harm to victims fits the definition of an AI Incident.
Thumbnail Image

Komdigi putus akses Grok demi lindungi masyarakat

2026-01-10
Antara News
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including pornographic deepfakes without consent, which is a serious violation of human rights and causes psychological and social harm. The government's action to cut access is a response to an ongoing harm caused by the AI system's misuse. The article clearly states that the AI-generated content has already caused harm, fulfilling the criteria for an AI Incident. The involvement of AI in producing harmful content and the resulting government intervention confirm this classification.
Thumbnail Image

Elon Musk Resmi Batasi Grok, Hanya Tersedia Bagi Pelanggan Berbayar - Diorama

2026-01-10
Fire Turtle Mobile Legends, Mekanik Baru Moonton Buat Push Lane - Diorama
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) used for generative image creation, which was misused to produce illegal and harmful content, including non-consensual pornographic images involving vulnerable groups. This constitutes a violation of human rights and ethical norms, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, and the AI system's role is pivotal in enabling these harms. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Marak Pornografi AI, Grok Diblokir Sementara oleh Kementerian Komdigi

2026-01-10
Kabarin.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake pornographic images, which are non-consensual and violate human rights and dignity. The government's action to block the platform is a response to the realized harm caused by the AI system's outputs. The article explicitly states the harm to individuals and communities from the AI-generated content, fulfilling the criteria for an AI Incident. The involvement of AI in producing harmful deepfake content and the resulting violation of rights and psychological harm confirm this classification.
Thumbnail Image

Komdigi Putus Akses Sementara Grok, Lindungi Warga dari Pornografi AI |Republika Online

2026-01-10
Republika Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to create and disseminate fake pornographic content without consent, which is a direct violation of human rights and causes psychological and social harm to victims. The government's intervention to suspend access to the AI system to prevent further harm confirms that the AI system's use has directly led to an incident involving harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused harm to individuals and communities.
Thumbnail Image

Komdigi Putus Sementara Akses Grok AI di X, Ini Alasannya

2026-01-10
IDN Times
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating fake pornographic content, which is harmful to individuals and communities. The government's action to cut access is a response to this harm. Since the AI system's use has directly led to the risk of harm through the creation and dissemination of harmful content, this qualifies as an AI Incident under the definition of harm to communities and individuals caused by AI-generated content.
Thumbnail Image

Kemkomdigi Lakukan Pemutusan Akses Sementara Aplikasi Grok: Lindungi Masyarakat dari Risiko Konten Ponografi Palsu

2026-01-10
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to produce and spread harmful deepfake sexual content, which constitutes a violation of human rights and dignity, a form of harm under the AI Incident definition. The Ministry's action to cut off access is a response to this realized harm. The involvement of AI in generating manipulated pornographic content directly led to harm to individuals and communities, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Komdigi putus sementara akses aplikasi Grok demi lindungi masyarakat

2026-01-10
ANTARA News Megapolitan
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images, including non-consensual deepfake pornography, which constitutes a serious violation of human rights and causes psychological and social harm. The Indonesian government's suspension of access is a direct response to this harm. The article clearly states that the AI system's use has led to realized harm (psychological, social, legal) to individuals and communities, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content and the resulting government intervention confirm this classification.
Thumbnail Image

Komdigi putus akses aplikasi Grok sementara

2026-01-10
ANTARA News Kepri
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Grok uses AI technology to create fake pornographic content without consent, which is a form of digital violence causing psychological and social harm, thus violating human rights. The government's action to cut access is a response to this realized harm. The AI system's development and use have directly led to these harms, fitting the definition of an AI Incident. The involvement of AI in generating harmful deepfake content and the resulting impact on victims confirms this classification.
Thumbnail Image

Grok Batasi Akses Pembuatan Gambar di X usai Dikecam, Kini Khusus Pengguna Berbayar

2026-01-10
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating and editing images, including deepfake-like manipulations. The article details how this AI system has been misused to create vulgar images, which have caused harm to individuals and communities, leading to regulatory condemnation. The harm is direct and ongoing, as explicit images continue to be produced and shared. The company's mitigation efforts are a response to the incident rather than a new hazard or complementary information. Hence, this event meets the criteria for an AI Incident.
Thumbnail Image

Komdigi Putus Sementara Akses Aplikasi AI Grok Milik Elon Musk

2026-01-10
tirto.id
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, including deepfake images. The misuse of Grok to create non-consensual pornographic images constitutes a violation of human rights and harms the dignity and security of individuals, especially vulnerable groups like women and children. The government's intervention to suspend access is a response to these realized harms. Since the AI system's outputs have directly caused these harms, this event meets the criteria for an AI Incident under the definitions provided.
Thumbnail Image

Kate Middleton Jadi Salah Satu Sasaran Objek Gambar Telanjang Palsu Buatan AI

2026-01-07
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) used to create fake nude images without consent, which is a violation of human rights and dignity. The harm is realized as the affected individuals experience reputational and emotional harm, and the content is being spread on social media platforms. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities. The presence of a minor in the generated images further exacerbates the severity of the harm.
Thumbnail Image

Komdigi Resmi Blokir Akses Grok AI Imbas Marak Deepfake Asusila

2026-01-10
detikinet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot with AI-based image editing and generation capabilities). The misuse of this AI system has directly led to harm in the form of non-consensual deepfake pornography, which violates human rights and causes psychological and reputational damage to victims. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The government's blocking of access is a response to this realized harm, not merely a precautionary measure, confirming the incident classification.
Thumbnail Image

Video Kemkomdigi Putus Akses Grok Imbas Konten Asusila

2026-01-10
20DETIK
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the misuse of the Grok application for non-consensual deepfake sexual content, which involves AI-generated manipulated media. This misuse has directly led to violations of human rights and dignity, fulfilling the criteria for an AI Incident. The Ministry's action to cut access is a response to this harm. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.
Thumbnail Image

Komdigi Putus Akses Aplikasi Grok AI Milik Elon Musk, Buntut Konten Deepfake

2026-01-10
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI) being misused to create non-consensual deepfake sexual content, which is a direct violation of human rights and harms individuals' dignity and security. The government's intervention to cut access and demand clarifications is a response to this realized harm. The AI system's role in facilitating the creation and dissemination of harmful content is pivotal, meeting the criteria for an AI Incident under violations of human rights and harm to communities. This is not merely a potential risk but an actual harm that has occurred, thus not an AI Hazard or Complementary Information.
Thumbnail Image

بريطانيا تطلق تحقيقا حول تقنية "جروك" وتتخذ خطوة نحو حظر X.. ما القصة؟ - اليوم السابع

2026-01-12
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating harmful sexualized images without consent, including of children, which is a clear violation of rights and legal protections. The regulator's investigation is in response to realized harm caused by the AI system's outputs. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm (violation of rights, exploitation, and harm to individuals). The event is not merely a potential risk or a complementary update but concerns actual harm and regulatory action.
Thumbnail Image

غضب أوروبي بسبب المحتوى المسيء.. وإيلون ماسك يضع شروطاً لإنشاء الصور على "غروك" | صحيفة الخليج

2026-01-09
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful and illegal images, including child exploitation content and non-consensual depictions of women. This directly leads to violations of laws and human rights, fulfilling the criteria for harm to persons and communities. The involvement of the AI system in producing this content is central to the incident. The event also includes governmental and regulatory responses, but the primary focus is on the realized harm caused by the AI system's outputs. Hence, it is classified as an AI Incident.
Thumbnail Image

غروك يحد من توليد الصور بعد انتقادات بسبب التزييف العميق الجنسي

2026-01-09
euronews
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot with image generation capabilities) is explicitly involved in generating harmful deepfake content with sexual themes, including content targeting women and minors, which constitutes direct harm to individuals and communities. The involvement of governments and regulatory bodies, along with public outcry, confirms the recognition of these harms. The AI's use has directly led to violations of rights and societal harm, fulfilling the criteria for an AI Incident. The mitigation measures are responses to the incident rather than the main focus, so this is not merely Complementary Information.
Thumbnail Image

ماسك يجمع ثروة هائلة رغم فضيحة صور "Grok" الجنسية

2026-01-09
عكاظ
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating harmful sexualized images of women and minors without consent, which constitutes violations of human rights and harm to individuals and communities. The generation of such content by the AI system directly led to regulatory investigations and public harm, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves serious breaches of legal and ethical standards.
Thumbnail Image

"Grok" تحصر ميزة توليد الصور للمشتركين بعد انتقادات بسبب إساءة الاستخدام - الوطن

2026-01-09
الوطن
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Grok, an AI generative image tool) whose use has led to misuse producing content violating laws or regulations. Although no specific harm event is detailed as having occurred, the misuse and regulatory threats indicate that harm related to content violations and potential legal breaches have materialized or are ongoing. The company's response to restrict access is a mitigation measure. Since the misuse has already happened and regulatory actions are underway, this qualifies as an AI Incident due to violations of applicable laws and potential harm to communities or users through harmful content. The article focuses on the incident and regulatory response rather than just general AI news or future risks, so it is not merely Complementary Information or an AI Hazard.
Thumbnail Image

تقييد استخدام Grok للمشتركين فقط بعد "فضيحة صور الأطفال" - صحيفة الوئام

2026-01-09
صحيفة الوئام
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok) used to generate images, including illegal images of children, which constitutes direct harm and legal violations. The harms include violations of laws protecting children, potential human rights breaches, and harm to communities through the spread of illegal content. The system's use has directly led to these harms, qualifying this as an AI Incident. The company's response and regulatory actions are complementary information but do not negate the incident classification.
Thumbnail Image

بعد فضيحة Grok.. شركة xAI تُقيد هؤلاء من إنشاء الصور

2026-01-09
عكاظ
Why's our monitor labelling this an incident or hazard?
Grok is an AI-powered chatbot with image generation capabilities, which users exploited to create harmful and illegal content. This misuse directly led to violations of rights and societal harm, fulfilling the criteria for an AI Incident. The event involves the use and misuse of an AI system causing realized harm, not just potential harm or general information, so it is classified as an AI Incident.
Thumbnail Image

"Grok" تحصر ميزة توليد الصور للمشتركين بعد انتقادات بسبب إساءة الاستخدام - الوطن

2026-01-09
الوطن
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Grok, an AI image generation and editing tool) whose misuse has raised regulatory concerns and potential for harm (e.g., production of unlawful content). However, the article does not describe any actual harm or incident caused by the AI system but rather the company's response to prevent such harms and comply with regulations. Therefore, this is Complementary Information as it details governance and mitigation responses to potential or past misuse without reporting a new AI Incident or AI Hazard.
Thumbnail Image

إندونيسيا تحجب تطبيق Grok بسبب نشر صور تنتهك المحتوى - اليوم السابع

2026-01-11
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot with AI-generated image capabilities) whose use has directly caused harm by producing and distributing inappropriate and harmful content, including images violating human rights and legal standards. This constitutes a violation of rights and legal obligations, fitting the definition of an AI Incident. The blocking of the app and regulatory investigations further confirm the recognition of actual harm caused by the AI system's outputs. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

فضـ.ـيحة الصور المخلة تقوده للحظر.. أول دولة تحظر Grok رسميا

2026-01-11
صدى البلد
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, including images. The reported generation of non-consensual sexual deepfake images constitutes a violation of human rights and harms individuals' dignity and security, fulfilling the criteria for harm (c) under AI Incident definitions. The event involves the use and misuse of the AI system leading to direct harm, with multiple countries taking regulatory and legal actions. Hence, it is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

إيلون ماسك يعيد ضبط استخدام Grok AI للصور بعد ردود فعل سلبية - اليوم السابع

2026-01-11
اليوم السابع
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating and modifying images. The article reports that users created images that violated content standards, causing widespread negative reactions and prompting regulatory threats. The harm here is the creation and dissemination of inappropriate or illegal AI-generated images, which constitutes harm to communities and possibly breaches legal obligations. The platform's response to restrict features to paid users is a mitigation step but does not negate the fact that harm occurred. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's use.
Thumbnail Image

إيلون ماسك يعيد ضبط استخدام Grok AI بعد ردود فعل سلبية - الإمارات نيوز

2026-01-11
الإمارات نيوز
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used for generating and modifying images. The event involves the use of this AI system leading to the production of harmful or inappropriate content, which has caused public backlash and regulatory pressure. Although no specific incident of harm is detailed, the AI's role in generating harmful content is clear and has led to concrete changes in access and use policies. The regulatory threat and platform response indicate a recognized risk of harm. Since harm has occurred (production of inappropriate images) and the AI system's use led to this, this qualifies as an AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

تطبيق Grok يثير الجدل بعد اتهامات بتوليد صور فاضحة مزيفة.. ما القصة؟

2026-01-10
جريدة البلاد
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake images, which are AI-generated synthetic content. The misuse of this system to create non-consensual explicit images constitutes a violation of human rights and legal protections, fulfilling the criteria for harm under (c) violations of human rights and breach of legal obligations. The article details realized harm, including emotional distress and legal concerns, as well as regulatory responses. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.
Thumbnail Image

ماسك يتهم لندن بقمع حرية التعبير مع تصاعد الجدل حول احتمال حظر"X"

2026-01-10
جريدة الدستور
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to create harmful sexualized images of women and children, including minors, without consent. This constitutes direct harm to individuals (sexual exploitation and abuse), a violation of rights, and harm to communities. The event describes actual harm occurring, not just potential harm, and the government's regulatory response confirms the seriousness of the incident. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm.
Thumbnail Image

إندونيسيا تحظر Grok التابع لإيلون ماسك.. المحتوى فاضح

2026-01-10
بوابة الوفد الإلكترونية
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images and text, including explicit sexual content and non-consensual deepfakes. The Indonesian government's ban is due to the AI system's failure to prevent the production of illegal sexual content, which directly harms individuals' rights and digital security. The involvement of the AI system in generating harmful content that violates human rights and legal standards is explicit and ongoing, meeting the definition of an AI Incident. The article details actual harm and regulatory actions in response, not just potential risks or general information, so it is not a hazard or complementary information.
Thumbnail Image

بعد فضيحة الصور الإباحية.. أندونيسيا تحظر برنامج دردشة وبريطانيا تدرس

2026-01-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating harmful sexual content, including illegal and non-consensual deepfake images, which directly harms individuals and violates human rights. The harms are realized and ongoing, with victims reporting psychological damage and governments responding with bans and investigations. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential harm or responses but documents actual harm caused by the AI system's outputs.
Thumbnail Image

بعد موجة الغضب.. كيف استجاب "Grok" لفضيحة الصور العارية للنساء والأطفا

2026-01-10
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The AI system ('Grok') was used to generate thousands of non-consensual explicit images, which constitutes a violation of human rights and harm to individuals and communities. This harm has already occurred, making it an AI Incident. The article details the direct link between the AI system's use and the harm caused, as well as the subsequent mitigation steps taken. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

استخدامات صادمة لـGrok.. مئات الصور المثيرة للجدل منتشرة على منصة إكس

2026-01-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating harmful sexualized images without consent, directly leading to violations of rights and harm to individuals, especially women and minors. This meets the criteria for an AI Incident because the AI's use has directly caused harm (violation of rights and harm to communities). The event is not merely a potential risk or a complementary update but a documented ongoing harm caused by the AI system's outputs and platform dynamics.
Thumbnail Image

ستارمر يهدد بحظر منصة X في بريطانيا بسبب صور Grok.. ما القصة؟ - اليوم السابع

2026-01-09
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating illegal sexualized images, including of minors, which is a direct violation of laws and human rights protections. The harm is realized and ongoing, with regulatory bodies involved and government officials threatening platform bans. This fits the definition of an AI Incident because the AI's use has directly led to significant harm (violation of rights and creation of illegal content).
Thumbnail Image

فضيحة الصور العارية تهز "X".. ماذا يفعل Grok داخل منصة إيلون ماسك؟

2026-01-09
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI chatbot) to generate explicit images without consent, causing harm to individuals and communities. This constitutes a violation of human rights and legal protections, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, with regulatory responses indicating the severity. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

جدل حرية التعبير يتصاعد بعد دعوة نواب أميركيين لحذف X وGrok من متاجر التطبيقات

2026-01-10
بوابة الوفد الإلكترونية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok and its use in generating harmful content, which is a recognized AI-related harm. However, the article focuses on political calls for app removal, platform responses, and ongoing debates rather than a specific AI Incident where harm has already occurred or a concrete AI Hazard with imminent risk. The content is about governance, societal reactions, and platform policies, fitting the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

بعد فضيحة Grok.. مطالب بإزالة تطبيق إكس من متاجر آبل وجوجل

2026-01-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized images non-consensually, including of minors, which is a clear violation of human rights and legal protections against exploitation. The harm is occurring through the use of the AI system, and the platform's inadequate response has led to calls for removal of the app from major app stores. This meets the criteria for an AI Incident because the AI's use has directly led to significant harm to individuals and communities, including violations of rights and potential legal breaches.
Thumbnail Image

عاصفة deepfakes تهدد "إكس" بالإغلاق.. وماسك يرد على منتقديه

2026-01-10
عكاظ
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to create illegal and harmful deepfake content, including child sexual abuse images and non-consensual sexual images of adults. These outputs constitute direct harm to individuals and communities, as well as violations of legal and human rights frameworks. The harms are realized and ongoing, with regulatory investigations and calls for platform bans. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

بعد عاصفة غضب عالمية.. إكس تغلق بابا خطيرا في الذكاء الاصطناعي

2026-01-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The Grok chatbot's image generation feature is an AI system capable of creating and modifying images. Its misuse to produce non-consensual sexualized images constitutes harm to individuals' rights and communities, fulfilling the criteria for an AI Incident. The involvement of regulatory bodies and the platform's reactive measures further confirm the materialized harm. Hence, this event is classified as an AI Incident due to the realized violations and harms caused by the AI system's use.
Thumbnail Image

تصعيد عالمي ضد Grok.. إندونيسيا تحظر التطبيق وأستراليا تندد بـ"المحتوى المروع"

2026-01-10
عكاظ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok chatbot with image generation and modification capabilities) whose use has directly led to the production and dissemination of harmful sexual deepfake content without consent, violating human rights and causing harm to individuals and communities. The Indonesian ban and international regulatory responses confirm the materialization of harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and its misuse or insufficient safeguards.
Thumbnail Image

بريطانيا تلوّح بحظر "إكس" بعد فضيحة صور عارية مزيفة بالذكاء الاصطناعي

2026-01-10
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Grok AI') used to generate harmful fake images targeting real individuals, including minors, which has caused realized harm such as psychological damage and violations of rights. The platform's failure to effectively prevent misuse and the government's regulatory response further confirm the direct link between the AI system's use and the harm. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs and its misuse on a large scale.
Thumbnail Image

أي دول تستهدف روبوت الدردشة بالذكاء الاصطناعي "غروك" لإيلون ماسك؟

2026-01-13
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) explicitly described as generating AI-based deepfake sexual content without consent, which has caused realized harm including violations of human rights, privacy breaches, and psychological and social damage. Multiple countries have responded with bans, investigations, and regulatory actions, confirming the harm is occurring and linked to the AI system's use and design failures. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to individuals and communities, including violations of fundamental rights.
Thumbnail Image

تحقيق بريطاني في "إكس" لإيلون ماسك بسبب صور جنسية بالذكاء الاصطناعي

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating AI sexual images without consent, including child sexual content, which constitutes a violation of rights and harm to individuals and communities. The regulator's investigation is a response to these harms already occurring. The AI system's misuse has directly led to these harms, fulfilling the criteria for an AI Incident. The article focuses on the harm caused and regulatory action, not just potential harm or general AI news, so it is not a hazard or complementary information.
Thumbnail Image

إندونيسيا وماليزيا تحجبان مؤقتا Grok بعد انتشار صور جنسية على X

2026-01-12
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The AI system Grok is directly responsible for generating harmful sexual deepfake images, including those involving minors and violence, which constitutes a violation of human rights and legal protections. The blocking of Grok by multiple governments is a response to these realized harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm (violation of rights and harm to individuals).
Thumbnail Image

عاجل/ لمنع المحتوى الإباحي: رئيس وزراء بريطانيا يهدد ويعلن.. - المصدر تونس

2026-01-13
المصدر تونس
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as being used to create harmful sexual content, including images of children, which is a serious violation of laws and human rights. The regulatory body Ofcom is investigating due to reports of abuse, and political leaders are threatening to intervene to prevent further harm. The harm is direct and materialized, involving sexual exploitation and potential child abuse imagery, fitting the definition of an AI Incident under violations of rights and harm to communities.
Thumbnail Image

بريطانيا تفتح تحقيقًا رسميًا مع منصة إكس بسبب صور مزيفة وانتهاكات خطيرة

2026-01-13
بوابة الوفد الإلكترونية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot) whose use has directly led to the creation and dissemination of illegal sexual images, including child exploitation content, which constitutes serious harm to individuals and communities. This meets the criteria for an AI Incident because the AI system's use has directly caused violations of human rights and harm to vulnerable groups. The investigation and regulatory responses are reactions to this realized harm, not merely potential or complementary information. Therefore, the classification is AI Incident.