AI-Generated 'First Day in Hell' Videos Spark Religious Outrage in Indonesia and Malaysia

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos depicting 'the first day in hell' as humorous content have circulated widely on social media, provoking strong condemnation from Indonesian and Malaysian religious authorities. The videos are considered misleading and offensive to religious beliefs, prompting calls for their removal and legal action against the creators.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI-generated video content that misrepresents and ridicules religious beliefs, leading to significant social and cultural harm. The AI system's use in creating these videos directly led to harm by misleading and offending religious communities, which fits the definition of harm to communities and violation of rights. The condemnation by religious bodies and calls for legal action further confirm the recognition of harm. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological; Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

MUI and PBNU Condemn AI Video "First Day in Hell": Misleading Content

2025-06-10
detiksumbagsel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated video content that misrepresents and ridicules religious beliefs, leading to significant social and cultural harm. The AI system's use in creating these videos directly led to harm by misleading and offending religious communities, which fits the definition of harm to communities and violation of rights. The condemnation by religious bodies and calls for legal action further confirm the recognition of harm. Hence, this is an AI Incident rather than a hazard or complementary information.
MUI Condemns AI Video 'First Day in Hell': Misleading!

2025-06-07
detik News
Why's our monitor labelling this an incident or hazard?
The videos were explicitly created using AI, as stated in the article. The content misrepresents religious doctrine, which religious leaders argue damages faith and misleads the community, especially younger or less knowledgeable individuals. This harm to religious belief and community values fits within the framework of violations of human rights and harm to communities. The event describes realized harm caused by the AI-generated content, not just potential harm. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.
Uproar over AI Video 'First Day in Hell'; DPR Member Expresses Dismay

2025-06-09
detik News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate visual content that has sparked moral and ethical concerns, particularly regarding religious respect and community values. While the videos have caused public unease and a call for responsible use of AI, the article does not report any realized harm such as injury, rights violations, or social disruption directly caused by the AI content. The main issue is the potential for future harm through misinformation or cultural/religious harm. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated content could plausibly lead to harm if not managed responsibly, but no direct harm has been documented yet.
Commotion over AI Video 'First Day in Hell'; MUI Calls the Content Misleading

2025-06-10
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The videos are explicitly described as AI-generated content that misrepresents religious teachings, which has led to public outcry and calls for legal prosecution. The harm is realized as it affects religious faith and community sentiments, which falls under violations of human rights and harm to communities. The AI system's use in generating misleading religious content directly led to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Viral YouTube Video 'First Day in Hell'; MUI Warns Muslims

2025-06-10
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (AI system involvement) that has been used to create videos depicting religious themes in a way that is considered misleading and harmful to the faith of a community. The harm is to the religious beliefs and potentially to the community's cohesion and respect for sacred values, which falls under harm to communities and violations of rights (religious rights). The harm is realized as the videos are already circulating and causing concern, not just a potential risk. Therefore, this qualifies as an AI Incident due to the direct use of AI in creating content that harms community values and religious rights.
MUI Condemns AI-Edited Video on the First Day in Hell, Demands Its Removal and Criminal Charges for Its Creators

2025-06-08
JawaPos.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create the edited video content, which directly led to harm in the form of religious offense and potential degradation of faith among the community, constituting harm to communities and violation of religious rights. The AI's role in generating misleading and offensive content is pivotal to the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content.
Hell-Themed AI Video Draws Condemnation from Netizens and Religious Figures in Indonesia and Malaysia

2025-06-10
siap.viva.co.id
Why's our monitor labelling this an incident or hazard?
The video is explicitly stated to be generated by AI technology, fulfilling the AI system involvement criterion. The harm is social and cultural offense, which falls under harm to communities. The harm is realized as the video has already caused widespread condemnation and distress among netizens and religious figures. This meets the definition of an AI Incident because the AI system's use directly led to harm to communities through offensive content dissemination. There is no indication that this is merely a potential risk or a complementary update; the harm is actual and ongoing.
Making Hell Videos with AI Could Amount to Apostasy, Says PBNU

2025-06-11
Sinar Harian
Why's our monitor labelling this an incident or hazard?
The videos are explicitly described as AI-generated content that has drawn offense and condemnation from religious authorities. While the content is provocative and socially sensitive, this article does not report any direct or indirect harm, such as injury, rights violations, or disruption, caused by the AI system. The harm described is primarily reputational and a matter of religious sentiment, which does not fall under the defined categories of an AI Incident. Instead, the event highlights societal and governance responses to AI-generated content, fitting the definition of Complementary Information rather than an Incident or Hazard.