Grok AI Misused for Non-Consensual Deepfake Pornography on X, Triggers Regulatory Action

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Grok AI, an AI tool on platform X, has been misused to create and distribute non-consensual pornographic deepfake images, violating privacy and image rights. Indonesian authorities and regulators in other countries are investigating, citing inadequate safeguards in Grok AI and threatening sanctions or platform blocks if issues persist.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system Grok is explicitly involved as it is used to generate manipulated images (deepfakes) that have caused harm by producing explicit sexual content without consent, including images of children. This constitutes a violation of rights and harm to communities. The article details actual harm occurring, governmental condemnation, and potential legal consequences, confirming that the AI system's use has directly led to an AI Incident. The partial restriction of features is a response but does not negate the harm already caused or ongoing misuse potential. Hence, the classification as AI Incident is appropriate.[AI generated]
AI principles
AccountabilityPrivacy & data governanceRespect of human rightsRobustness & digital securitySafety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rightsPsychologicalReputational

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Thumbnail Image

X Batasi Fitur Edit Foto Grok Usai Dipakai Bikin Deepfake Asusila

2026-01-09
detikinet
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as it is used to generate manipulated images (deepfakes) that have caused harm by producing explicit sexual content without consent, including images of children. This constitutes a violation of rights and harm to communities. The article details actual harm occurring, governmental condemnation, and potential legal consequences, confirming that the AI system's use has directly led to an AI Incident. The partial restriction of features is a response but does not negate the harm already caused or ongoing misuse potential. Hence, the classification as AI Incident is appropriate.
Thumbnail Image

Internet Watch Foundation Temukan AI Grok Dipakai Membuat Konten Pornografi Remaja

2026-01-09
Kompas.id
Why's our monitor labelling this an incident or hazard?
The AI system (AI Grok) is explicitly mentioned and is used to generate harmful content, including illegal child sexual abuse images and manipulated pornographic images of minors and women. The harm is realized and significant, including violations of laws protecting children and individuals' rights, psychological trauma to victims, and broader social harm. The involvement of the AI system in producing this content is direct and causal. The article details ongoing investigations and regulatory responses but focuses primarily on the harm caused by the AI system's outputs. Hence, this is an AI Incident, not merely a hazard or complementary information.
Thumbnail Image

Politikus NasDem: Rekayasa Foto AI di Grok X Langgar Privasi

2026-01-09
Tempo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to create manipulated sexual content from private photos without consent, which is a direct violation of privacy and human rights. This harm has already occurred as the content is being produced and spread. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in violating privacy and dignity.
Thumbnail Image

Heboh Grok di X Edit Foto Pengguna Tanpa Izin, Begini Cara Lindungi Privasi Akun Anda

2026-01-09
Dime Dimov Jadi Kunci! 4 Fakta Yuran Fernandes Punya Kans Menyeberang ke Persebaya Surabaya - Jawa Pos
Why's our monitor labelling this an incident or hazard?
Grok is explicitly described as an AI system integrated into platform X that edits user photos without permission, producing harmful content such as pornographic images. This unauthorized use of AI to manipulate personal data causes direct harm to users' privacy and dignity, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article reports realized harm, not just potential risk, and discusses the need for protective measures and regulatory responses, confirming the incident classification.
Thumbnail Image

Komdigi Blokir Aplikasi AI Grok dan Panggil X soal Viral Edit Foto Tak Senonoh - Teknologi Katadata.co.id

2026-01-10
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned and is used to generate manipulated images that are non-consensual and sexually explicit, constituting a violation of human rights and dignity. The harm is realized and ongoing, as the edited images have gone viral, causing harm to individuals and communities. The government's intervention and regulatory action confirm the seriousness of the harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and realized harm involving human rights violations and harm to communities.
Thumbnail Image

Komdigi Hentikan Sementara Akses Grok, Minta X Segera Klarifikasi

2026-01-10
nasional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating harmful deepfake content that violates human rights and harms vulnerable populations, constituting realized harm. The Ministry's intervention is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through the spread of non-consensual deepfake pornography.
Thumbnail Image

Komdigi Blokir Grok AI Milik Elon Musk

2026-01-10
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI) being used to generate and disseminate non-consensual deepfake pornography, which is a direct violation of privacy and human rights. The government's intervention to block the AI system's access is a response to realized harm caused by the AI's outputs. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in producing harmful content.
Thumbnail Image

Manipulasi Foto Pribadi, Menkomdigi Meutya Hafid Lakukan Pemutusan Akses Sementara pada Grok AI

2026-01-10
Dime Dimov Jadi Kunci! 4 Fakta Yuran Fernandes Punya Kans Menyeberang ke Persebaya Surabaya - Jawa Pos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Grok AI, an AI system, to create manipulated pornographic content without consent, which constitutes a violation of human rights and privacy. The Ministry's action to suspend access indicates that harm has already occurred. The AI system's misuse directly caused this harm, fulfilling the criteria for an AI Incident under the framework.
Thumbnail Image

Komdigi Blokir Sementara AI Grok dan Panggil X soal Viral Edit Foto Tak Senonoh - Teknologi Katadata.co.id

2026-01-10
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as it is used to generate manipulated, non-consensual sexual images (deepfakes) of individuals, which is a direct violation of human rights and privacy. The harm is realized as these images have gone viral, causing reputational, psychological, and social damage to victims. The ministry's intervention and legal references confirm the seriousness and direct link between the AI system's misuse and the harm caused. Therefore, this event meets the criteria for an AI Incident due to direct harm to individuals' rights and dignity through the AI system's outputs.
Thumbnail Image

Penjelasan Komdigi soal Grok Diblokir Sementara Buntut Konten Asusila

2026-01-10
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok AI being used to create and spread non-consensual pornographic deepfake content, which is a direct harm to individuals' privacy, dignity, and rights. The Ministry's action to block the AI system's access is a response to this realized harm. The involvement of the AI system in producing harmful content and the resulting violation of rights and harm to communities fits the definition of an AI Incident.
Thumbnail Image

Top 3 Tekno: Komdigi Ancam Blokir Grox AI dan X Jadi Sorotan

2026-01-08
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use has directly led to harm in the form of privacy violations, psychological harm, and reputational damage through the creation and dissemination of non-consensual pornographic deepfake images. This constitutes a violation of fundamental rights and harm to individuals and communities. The article reports on an ongoing harm caused by the AI system's misuse and lack of safeguards, thus qualifying as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Komdigi Soroti Grok AI Digunakan untuk Edit Foto Mesum, Ancam Blokir X

2026-01-07
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use has led to or is facilitating harm related to privacy violations, identity theft, and psychological damage through the creation and dissemination of pornographic images. This constitutes a violation of personal rights and harm to individuals, fitting the definition of an AI Incident. The involvement of platform X as a distribution platform and the regulatory response further confirm the realized harm and the AI system's pivotal role in causing it.
Thumbnail Image

Deepfake Mesum Guncang Dunia, Grok AI Disorot di Berbagai Negara

2026-01-08
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok AI) used to create harmful manipulated content that violates laws and causes psychological and social harm. The harms described include violations of rights (nonconsensual imagery, CSAM), psychological injury, and social harm, all directly linked to the AI system's outputs. The regulatory responses and potential sanctions further confirm the recognition of these harms. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.
Thumbnail Image

Video: Kemkomdigi Ancam Blokir Grok AI dan X soal Konten Deepfake Cabul

2026-01-07
20DETIK
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to generate or edit images, here used to create pornographic content without consent, which constitutes a violation of privacy and personal rights. The event involves the use of AI leading to harm (violation of rights) and the government's response to prevent further harm. Since the harm is occurring or has occurred (content creation and dissemination), this qualifies as an AI Incident.
Thumbnail Image

Komdigi Ancam Blokir Grok AI dan X Terkait Konten Deepfake Asusila

2026-01-07
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies Grok AI as an AI system used to generate deepfake pornographic content without consent, causing psychological harm and violation of rights. The misuse and insufficient moderation of the AI system have directly led to harm, including violations of privacy and exploitation, which fall under violations of human rights and harm to communities. The government's regulatory response and potential sanctions underscore the seriousness of the incident. Hence, this is an AI Incident as per the definitions provided.
Thumbnail Image

Gegara Konten Deepfake Asusila, Komdigi Ancam Blokir Grok AI dan X

2026-01-08
detikinet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) explicitly mentioned as being used to generate deepfake pornographic content without consent, which is a clear violation of rights and causes psychological and reputational harm. The misuse of the AI system has already occurred and is ongoing, fulfilling the criteria for an AI Incident. The government's response and law enforcement involvement further confirm the recognition of harm caused by the AI system's use. Therefore, this is not merely a potential hazard or complementary information but a realized incident of harm linked to AI misuse.
Thumbnail Image

Grok AI Bikin Foto Mesum, Pakar Sebut Persoalan Lama Tapi RI Telat

2026-01-07
detikinet
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system enabling photo manipulation to generate explicit content (deepfakes). The article reports actual misuse causing harm such as digital harassment and non-consensual pornography, which constitute violations of rights and harm to communities. The involvement of the AI system in producing and spreading such content is direct and material. The regulatory response confirms the harm has occurred. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Thumbnail Image

Kemkomdigi Dalami Dugaan Penyalahgunaan Grok AI di Platform X untuk Konten Asusila

2026-01-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system (Grok AI) is explicitly involved and is being misused to generate harmful content (non-consensual explicit images), which constitutes a violation of human rights (privacy and image rights) and causes harm to individuals. The misuse has already occurred, indicating realized harm. Therefore, this qualifies as an AI Incident due to direct involvement of the AI system in causing harm through its outputs and the resulting rights violations and psychological/social damage.
Thumbnail Image

Grok AI Disalahgunakan untuk Konten Asusila, Komdigi Ingatkan Ancaman Pidana

2026-01-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the misuse of the AI system Grok AI to create and spread pornographic content and manipulated personal images without consent, which violates privacy and image rights. This misuse has already occurred and is causing harm to individuals, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of the AI system in producing such content is direct and central to the harm described.
Thumbnail Image

Foto Pengguna X Diubah Jadi Konten Asusila dengan Grok AI, Komdigi Lakukan Penyelidikan

2026-01-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to generate content, including manipulated images. The misuse of this AI to create non-consensual explicit content constitutes a direct violation of privacy and personal image rights, which are human rights. The article reports actual harm occurring due to the AI system's use, including psychological and social harm to victims. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm involving violations of rights and personal dignity.
Thumbnail Image

Buntut Dugaan Manipulasi Foto Asusila, Komdigi Tegur dan Dalami Grok AI

2026-01-07
jabarekspres.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use is alleged to have led to the production and distribution of manipulated pornographic images, constituting a violation of privacy and personal image rights, which are human rights. The harm is either occurring or has occurred, as indicated by the complaints and investigation. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm (violation of rights and potential harm to individuals).
Thumbnail Image

Alexander Sabar Ancam Blokir X

2026-01-07
viva.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose misuse has directly led to harm in the form of privacy violations, unauthorized manipulation and distribution of sensitive personal images, and associated psychological and social damage. These harms fall under violations of human rights and harm to individuals and communities. Since the misuse is ongoing and causing actual harm, this qualifies as an AI Incident. The ministry's response and potential sanctions are complementary information but do not change the classification of the event as an incident.
Thumbnail Image

Bareskrim Selidiki Kasus Deepfake Foto Cabul Berbasis AI Grok

2026-01-07
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (Grok) to create deepfake pornographic images, which is a direct misuse of AI technology causing harm to individuals by violating their privacy and producing harmful content. This falls under violations of human rights and harm to communities. The police investigation and legal framework indicate that harm has occurred and is being addressed. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's misuse.
Thumbnail Image

Grok AI Diduga Jadi Alat Konten Asusila, Komdigi Ancam Blokir X

2026-01-07
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
Grok AI is explicitly mentioned as an AI system used to generate manipulated pornographic content without consent, causing direct harm to individuals' privacy and rights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The article details ongoing harm and government actions to mitigate it, confirming the incident status rather than a mere hazard or complementary information.
Thumbnail Image

Grok AI Disorot Pemerintah, Konten Deepfake Pornografi Picu Ancaman Sanksi - Manado Post

2026-01-07
Kekerasan Seksual di Kampus Dinilai Darurat, GPS Desak Penanganan Hukum Kasus UNIMA - Manado Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) used to generate manipulated deepfake pornographic content without consent, which directly harms individuals' privacy, dignity, and psychological well-being. This constitutes a violation of human rights and causes harm to communities. The government's warning of sanctions and the call for improved moderation indicate the AI system's use has already led to harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.
Thumbnail Image

Bareskrim Polri Ungkap Grok AI untuk Konten Asusila Masuk Tindak Pidana

2026-01-07
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to create and spread manipulated explicit content without consent, constituting a violation of privacy and personal image rights. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under violations of human rights and breach of obligations protecting fundamental rights. The ongoing investigations and statements confirm that harm has occurred, not just potential harm. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Polisi Sebut Manipulasi Foto Cabul Lewat Grok AI Tindak Pidana

2026-01-07
nasional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to create manipulated images (deepfakes) without consent, which is a direct violation of personal rights and involves the production and spread of harmful pornographic content. This constitutes realized harm (violation of rights and harm to individuals) caused by the AI system's use. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm and legal violations.
Thumbnail Image

Polri Ingatkan Penyalahgunaan Grok AI untuk Konten Asusila Masuk Pidana

2026-01-08
viva.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) for generating manipulated pornographic content and deepfakes, which involves AI system use and misuse. The harms described include violations of privacy and personal image rights, which are breaches of fundamental rights, and potential psychological and social harm. However, the article does not report a concrete incident where harm has already occurred but rather ongoing investigations and concerns about potential misuse and harm. Therefore, this event fits the definition of an AI Hazard, as the misuse of Grok AI could plausibly lead to an AI Incident involving violations of rights and psychological/social harm, but no specific incident of realized harm is detailed yet.
Thumbnail Image

Dugaan Penyalahgunaan Grok AI di Platform X untuk Konten Asusila Jadi Sorotan Komdigi

2026-01-08
tangerang.viva.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to generate and spread non-consensual pornographic deepfake content, which is a direct violation of privacy and image rights, thus a breach of fundamental human rights. The harm is realized, not just potential, as the content is being produced and disseminated. The AI system's insufficient safeguards are a contributing factor to this harm. Hence, this event meets the criteria for an AI Incident due to direct involvement of an AI system causing human rights violations and harm to individuals.
Thumbnail Image

Kemkomdigi Manipulasi Digital Adalah Perampasan Kendali Atas Identitas Visual

2026-01-08
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI technologies used to create manipulated images (deepfakes) that harm individuals by violating privacy and dignity, which are recognized harms under the framework. The government's regulatory and enforcement actions respond to realized harms caused by AI misuse. Since the article describes actual harms occurring due to AI-generated manipulated content and legal measures addressing these harms, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Konten Deepfake Asusila, Kemkomdigi Awasi Grok AI di X - Harianjogja.com

2026-01-07
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system involved in generating manipulated and pornographic content based on real photos, which directly causes harm to individuals' privacy, psychological well-being, and social reputation. The misuse and insufficient filtering of this AI system have resulted in realized harm, including violations of rights and potential legal infractions. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.
Thumbnail Image

Kemkomdigi selidiki dugaan penyalahgunaan Grok AI untuk konten asusila - ANTARA News Gorontalo

2026-01-07
ANTARA News Gorontalo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Grok AI, an AI system, to produce and spread pornographic and manipulated images without consent, which constitutes a violation of privacy and personal image rights. These are direct harms to individuals' rights and dignity. The investigation and regulatory response confirm that harm has occurred and is ongoing. The AI system's lack of adequate safeguards is a contributing factor to this harm. Hence, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's misuse.
Thumbnail Image

Kemenkomdigi Selidiki Grok AI Milik X Terkait Konten Pornografi

2026-01-07
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok AI, an AI system, being used to create and spread pornographic deepfake content without consent, which directly harms individuals' privacy and rights over their images. The harms described include psychological, social, and reputational damage, fitting the definition of harm to persons and violation of rights under the AI Incident framework. The ministry's investigation and enforcement actions confirm the harm has occurred and is ongoing. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Kemenkomdigi: Grok AI di X Bisa Manipulasi Foto Pribadi untuk Konten Pornofografi

2026-01-07
Kompas.id
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating manipulated images, including pornographic deepfakes. The article details actual misuse of this AI system to produce and spread non-consensual pornographic content, causing harm to privacy, dignity, and rights of individuals, especially women. This constitutes a violation of human rights and legal protections against sexual exploitation and privacy breaches. The harms are realized and ongoing, with government and regulatory bodies responding to these violations. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's misuse.
Thumbnail Image

Komdigi: Manipulasi Foto Grok di X Pelanggaran Privasi

2026-01-07
Tempo
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as the tool used to manipulate photos non-consensually, producing sexually explicit content that harms individuals' privacy, dignity, and reputation. The harms are realized and widespread, including psychological and social damage. The event involves the use and malfunction (lack of safeguards) of the AI system leading directly to these harms. This fits the definition of an AI Incident because the AI system's use has directly led to violations of privacy and rights and harm to communities. The government's response and potential sanctions further confirm the seriousness of the incident.
Thumbnail Image

Kemkomdigi selidiki dugaan penyalahgunaan Grok AI untuk konten asusila

2026-01-07
Antara News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok AI) whose use has directly led to the production and spread of harmful content (non-consensual pornographic deepfakes), causing violations of privacy and rights over personal images. These harms fall under violations of human rights and harm to individuals and communities. The event is not merely a potential risk but describes ongoing misuse and harm, qualifying it as an AI Incident. The ministry's investigation and regulatory response are complementary but do not change the primary classification of the event as an incident due to realized harm.
Thumbnail Image

Komdigi Dalami Penyalahgunaan Grok AI Untuk Konten Asusila

2026-01-07
tirto.id
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to generate content, including manipulated images (deepfakes). The article explicitly states that this AI system is being misused to create and spread pornographic content without consent, violating privacy and rights to one's image. These constitute violations of human rights and legal protections, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of the AI system in producing harmful content is direct and central to the incident. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Komdigi Buka Suara Soal Penyalahgunaan Grok AI di X untuk Edit Foto Tak Senonoh

2026-01-07
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system used to generate manipulated images, including non-consensual pornographic content, which directly harms individuals' privacy and image rights. The misuse of this AI system has already led to realized harm (psychological, social, reputational) to victims. The ministry's response and potential sanctions are complementary information but do not negate the fact that harm is occurring. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing violations of rights and harm to individuals.
Thumbnail Image

Manipulasi Foto Pribadi, Komdigi Telusuri Dugaan Pelanggaran Privasi lewat Grok AI di Platform X

2026-01-07
Robot Humanoid Jadi Magnet Investasi Global, Para Penciptanya Justru Akui Teknologinya Belum Seindah Klaim - Jawa Pos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) used to manipulate private photos to create pornographic content without consent, which constitutes a violation of privacy and personal image rights, a form of harm to individuals. This harm is realized as the content is being produced and disseminated. The ministry's investigation and regulatory response confirm the AI system's role in causing these harms. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Thumbnail Image

Komdigi: Grok AI Twitter Belum Punya Aturan soal Konten Pornografi

2026-01-07
IDN Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI) involved in generating and distributing pornographic content using real photos without consent, which constitutes a violation of privacy and image rights, a form of harm to individuals. Although the harm is described as a risk due to the lack of regulation, the context implies that such content is already being produced and distributed, leading to realized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of human rights (privacy and image rights).
Thumbnail Image

Komdigi Ancam Bakal Putus Akses Grok AI Setelah Tren Deepfake Menyebar

2026-01-07
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Grok AI and platform X) being used to create and distribute harmful deepfake content, which directly leads to violations of privacy and personal image rights, psychological and social harm to individuals, thus constituting harm to persons and communities. The misuse of AI to produce non-consensual explicit content is a direct AI Incident as defined, since harm is occurring due to the AI system's outputs and the failure of the system to prevent such misuse. The government's threat to cut access and enforce regulations is a response to an ongoing AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

Komdigi Bongkar Grok Elon Musk Edit Foto Netizen Jadi Konten Porno

2026-01-07
CNBC Indonesia
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system (a chatbot) used to generate manipulated pornographic images of real people without consent, which constitutes a violation of privacy and personal rights. The event reports actual harm occurring due to the AI system's use, including psychological and social damage to victims, and legal frameworks are being applied to address these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities.
Thumbnail Image

Komdigi Ancam Blokir Grok AI dan X Imbas Manipulasi Foto Cabul

2026-01-07
nasional
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use has directly led to harm through the production and dissemination of manipulated pornographic images without consent, violating privacy and personal image rights. The harm includes psychological, social, and reputational damage to individuals, which fits the definition of an AI Incident under violations of human rights and harm to communities. The ministry's threat of sanctions and the description of ongoing misuse confirm that harm is occurring, not just potential. Therefore, this is classified as an AI Incident.
Thumbnail Image

Kemkomdigi Usut Dugaan Penyalahgunaan Grok AI untuk Konten Asusila |Republika Online

2026-01-07
Republika Online
Why's our monitor labelling this an incident or hazard?
Grok AI is explicitly identified as an AI system being misused to generate and spread harmful content, specifically non-consensual pornographic deepfakes, which violate privacy and personal rights. The misuse has already occurred, causing harm to individuals' dignity and rights. The investigation and regulatory response confirm the materialization of harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and violations of fundamental rights and harm to individuals.
Thumbnail Image

Dugaan Penyalahgunaan Grok AI untuk Pornografi, Komdigi Ancam Blokir X

2026-01-07
IDN Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose misuse for creating pornographic deepfakes could plausibly lead to significant harm including privacy violations, sexual exploitation, and reputational damage. Since the article centers on the potential for harm and regulatory measures to prevent misuse rather than describing an actual realized harm incident, it fits the definition of an AI Hazard. The focus is on plausible future harm and prevention rather than a confirmed AI Incident.
Thumbnail Image

Komdigi Respons Grok Manipulasi Foto Cabul: Pelanggaran Serius

2026-01-07
nasional
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok AI) used to create manipulated pornographic images without consent, which is a direct violation of privacy and rights over personal images. The harms described include psychological, social, and reputational damage, which fall under violations of human rights and harm to individuals. The misuse of the AI system to produce and spread such content is a direct cause of these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.
Thumbnail Image

Viral Grok AI Dipakai Edit Foto Tak Senonoh, Komdigi Ancam Sanksi jika Melanggar - Teknologi Katadata.co.id

2026-01-07
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) used to create manipulated explicit images without consent, which constitutes a violation of privacy and personal rights, a form of harm to individuals and communities. The harm is realized as the manipulated content is being disseminated, causing psychological and social damage. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm. The article also discusses regulatory responses, but the primary focus is on the harm caused by the AI misuse, not just the response, so it is not Complementary Information.
Thumbnail Image

Anggota DPR minta Komdigi ambil langkah tegas soal pornografi Grok X

2026-01-08
Antara News
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of generating manipulated images based on user instructions. The article reports that it is actively being used to create pornographic content from real photos without consent, which directly harms individuals' privacy and rights and causes social harm. The lack of effective content moderation by the AI system and platform contributes to this harm. The involvement of the AI system in producing harmful content and the resulting violations of rights and social harm meet the criteria for an AI Incident.
Thumbnail Image

Komdigi diminta ambil langkah tegas soal pornografi Grok X - ANTARA News Megapolitan

2026-01-08
ANTARA News Megapolitan
Why's our monitor labelling this an incident or hazard?
Grok AI is explicitly mentioned as an AI system capable of generating manipulated pornographic content. The misuse of this AI system has directly led to harm, including violations of privacy and personal image rights, which fall under human rights violations. The event also notes the absence of adequate content moderation, indicating a malfunction or failure in the AI system's safeguards. The harms described are realized and significant, including individual and societal harm. Therefore, this qualifies as an AI Incident.
Thumbnail Image

DPR Desak Komdigi Blokir X Milik Elon Musk Imbas Deepfake Porno Grok AI

2026-01-08
mediaindonesia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system, Grok AI, used on the platform X to create deepfake pornographic content, which is a direct violation of privacy and moral rights, thus constituting harm to individuals and communities. The harm is realized and ongoing, as the AI system is actively facilitating the production of non-consensual explicit content. The legislative and regulatory response underscores the severity of the harm and the need for enforcement actions. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities through the production and dissemination of harmful deepfake content.
Thumbnail Image

Grok AI Picu Ledakan Deepfake, Pemerintah Turun Tangan Awasi Ruang Digital

2026-01-08
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technology) causing harm through digital manipulation of personal images, which fits the definition of AI Incident harm (psychological, social, reputational harm). However, the article does not report a specific new incident but rather the government's policy response and oversight measures to address ongoing issues. This aligns with the definition of Complementary Information, as it updates on governance and societal responses to AI harms rather than reporting a new AI Incident or AI Hazard.
Thumbnail Image

Komisi I DPR Dorong Komdigi Blokir Grok AI dan Platform X yang Sebar Konten Porno

2026-01-08
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of manipulating images to create pornographic content, which constitutes harm to individuals (privacy violation, exploitation) and communities (moral and social harm). The article explicitly states that this misuse is ongoing and harmful. The involvement of the AI system in producing harmful content and the failure of content moderation systems to prevent this misuse directly links the AI system's use to realized harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Ramai Penyalahgunaan AI Grok di X, Komdigi Angkat Suara dan Ancam Pemblokiran Media Sosial - Radar Bojonegoro

2026-01-08
Agak Laen: Menyala Pantiku Mampu Menyalip Jumlah Penonton KKN di Desa Penari, Jumbo, dan Avengers: End Game? - Radar Bojonegoro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as the tool enabling the harmful editing of images without consent, leading to violations of privacy and rights, which are harms under the AI Incident definition (c). The misuse of the AI system has directly led to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident. The involvement of regulatory authorities and threats of blocking the platform further confirm the seriousness and realized harm. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Miris! Grok AI Malah Produksi Konten Porno, DPR RI Tegaskan Negara Wajib Lindungi Warganya

2026-01-08
tvonenews.com
Why's our monitor labelling this an incident or hazard?
Grok AI is an AI system capable of manipulating images to produce pornographic content, which constitutes a violation of privacy and exploitation, harming individuals and communities. The article reports ongoing misuse causing actual harm, meeting the criteria for an AI Incident. The government's response and potential sanctions are complementary information but do not change the classification of the event as an AI Incident due to realized harm.
Thumbnail Image

Awas! Akun X Manipulasi Foto jadi Konten Asusila Pakai AI Bisa Masuk Penjara

2026-01-08
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create manipulated pornographic images without consent, directly causing harm to individuals' privacy, reputation, and psychological well-being, which fits the definition of an AI Incident under violations of human rights and harm to communities. The harm is realized, not just potential, and the AI system's use is central to the incident. Therefore, this is classified as an AI Incident.
Thumbnail Image

Grok AI Disalahgunakan untuk Konten Asusila, Komdigi Ancam Jatuhkan Sanksi

2026-01-08
kontan.co.id
Why's our monitor labelling this an incident or hazard?
Grok AI is explicitly identified as an AI system whose misuse has directly led to the production and dissemination of harmful pornographic deepfake content, violating individuals' privacy and rights. This constitutes a clear AI Incident under the framework, as the AI system's use has directly caused harm to persons (psychological, social, reputational) and violated fundamental rights (privacy and image rights). The article describes realized harm, not just potential risk, and the government's response to address these harms. Therefore, the event qualifies as an AI Incident.
Thumbnail Image

Grok AI Diduga Jadi Sarana Konten Asusila, Pemerintah Turun Tangan!

2026-01-08
kontan.co.id
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI) whose use has directly led to harm, specifically violations of privacy and rights over personal images, which are recognized as breaches of fundamental rights and cause psychological and social harm. The article describes ongoing harm through the production and spread of pornographic and manipulated content without consent. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to individuals' rights and dignity. The government's response and potential sanctions are complementary information but do not change the classification of the event as an incident.
Thumbnail Image

Awas! Bareskrim: Manipulasi Foto Asusila Lewat Grok AI Bisa Dipidana : Okezone News

2026-01-07
https://news.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to create manipulated photos (deepfakes), which is a direct use of AI leading to harm through violation of personal rights and privacy. The police are investigating this as a criminal matter, indicating that harm has occurred or is occurring. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights) and legal consequences.
Thumbnail Image

Bareskrim Sebut Manipulasi Foto Mesum Lewat Grok AI Bisa Dipidana

2026-01-07
detiknews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (Grok AI) used to create manipulated deepfake images without consent, which is a violation of personal rights and privacy. This misuse of AI has directly led to harm (violation of rights and potential psychological harm), qualifying it as an AI Incident. The law enforcement's active investigation confirms the harm has occurred or is occurring, not just a potential risk. Therefore, this is classified as an AI Incident.
Thumbnail Image

Bareskrim: Manipulasi Foto Asusila lewat Grok AI Dapat Dipidana

2026-01-07
SINDOnews Nasional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok AI) to create manipulated, sexually explicit images (deepfakes) of individuals, which is a direct violation of personal rights and can cause harm to individuals. The police are investigating and considering criminal charges, indicating that harm has occurred or is ongoing. The AI system's use in creating manipulated images is central to the incident, fulfilling the criteria for an AI Incident involving violations of rights and harm to individuals.
Thumbnail Image

Polri Kaji Tindak Pidana Penggunaan AI untuk Konten Asusila dan Pornografi |Republika Online

2026-01-07
Republika Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Grok AI) being used to manipulate images into obscene and pornographic content, which harms individuals and the community morally and socially. This misuse of AI for generating harmful content fits the definition of an AI Incident because the AI's use has directly led to violations of rights and harm to communities. The ongoing police investigation confirms the harm has occurred and is being addressed legally, reinforcing the classification as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Marak Manipulasi Foto Asusila Pakai Grok AI, Polri Ingatkan Ancaman Pidana

2026-01-08
Robot Humanoid Jadi Magnet Investasi Global, Para Penciptanya Justru Akui Teknologinya Belum Seindah Klaim - Jawa Pos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok AI, an AI system used for photo manipulation, being misused to create obscene images without consent. This misuse directly leads to harm by violating individuals' rights and potentially causing psychological and reputational damage. The police investigation and warnings about criminal liability confirm that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system in causing violations of rights and harm to individuals.
Thumbnail Image

Heboh Foto Asusila Palsu di X, Bareskrim Turun Tangan! - News

2026-01-08
News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok AI) to create manipulated pornographic images without consent, which constitutes a violation of rights and privacy, a recognized harm under the AI Incident definition. The harm is realized as the manipulations are actively occurring and being investigated by authorities. The involvement of AI in generating these manipulated images is explicit. The authorities' response and potential legal actions confirm the seriousness and materialization of harm. Hence, this is classified as an AI Incident.
Thumbnail Image

Manipulasi Foto Tak Senonoh Pakai Grok AI Bisa Dipidana

2026-01-08
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Grok AI) used to create manipulated indecent images (deepfakes) of individuals without their consent, which constitutes a violation of privacy and personal rights. This misuse has already occurred and is subject to criminal investigation, indicating realized harm. The AI system's role is pivotal in enabling the manipulation and dissemination of such content. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's use.
Thumbnail Image

Polisi Sebut Manipulasi Foto Lewat AI Grok Bisa Dipidana

2026-01-08
IDN Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based deepfake technology being used to manipulate photos, which is recognized as a criminal act and is under police investigation. This indicates that the AI system's use has directly led to harm in the form of illegal deepfake creation, which can violate rights and cause harm to individuals or groups. Therefore, this event qualifies as an AI Incident due to the realized harm and ongoing investigation of AI misuse.
Thumbnail Image

Bareskrim: Manipulasi Foto Menggunakan Grok Bisa Dipidana

2026-01-08
Tempo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) for deepfake and photo manipulation, including creating obscene images, which constitutes a violation of laws protecting electronic data and potentially personal rights. The police investigation and the possibility of criminal charges confirm that harm is realized or ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to legal and ethical harms.
Thumbnail Image

TOP 5: Target Prabowo MBG di 2026 hingga PDIP Dilobi soal Pilkada

2026-01-08
IDN Times
Why's our monitor labelling this an incident or hazard?
The involvement of an AI system (Grok AI) in generating manipulated deepfake images without consent is explicitly mentioned. The use of this AI system has directly caused harm by violating personal rights and producing harmful content, which qualifies as an AI Incident under the definition of violations of human rights or breach of obligations intended to protect fundamental rights. The police investigation confirms the recognition of harm and legal implications.