Jakarta Officials Sanctioned for Using AI-Generated Photos to Falsify Public Complaint Responses

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Jakarta public officials used AI-generated photos to falsely report the resolution of citizen complaints about illegal parking via the JAKI app. The incident led to disciplinary actions, public apologies, and an official investigation, highlighting the misuse of AI to deceive the public and undermine trust in government services. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate visual responses to citizen complaints, but the AI output did not reflect reality, causing misinformation and public distrust. This constitutes indirect harm to the community and a breach of obligations for transparent public service. It therefore qualifies as an AI Incident: the AI system's use directly led to harm in the form of misinformation and public criticism. The article focuses on the incident itself and the response to it, rather than solely on broader AI governance, so it is not merely Complementary Information. [AI generated]
AI principles
Transparency & explainability; Accountability

Industries
Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation


Articles about this incident or hazard

Kalisari Subdistrict Head Summoned by DKI Inspectorate After JAKI Complaints Answered with AI

2026-04-06
CNNindonesia
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate visual responses to citizen complaints, but the AI output did not reflect reality, causing misinformation and public distrust. This constitutes indirect harm to the community and a breach of obligations for transparent public service. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (misinformation and public criticism). The article focuses on the incident and the response to it, not just on the response or broader AI governance, so it is not merely Complementary Information.

DKI Inspectorate Summons Kalisari Subdistrict Head After PPSU Officer Uses AI

2026-04-06
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to manipulate visual content in a public service context, which led to public criticism and concerns about misinformation. The AI's role in altering images that misrepresent reality has directly led to harm in terms of community trust and potential misinformation, which falls under harm to communities. The local government's response confirms the recognition of this harm. Therefore, this is an AI Incident because the AI system's use has directly led to harm, even if non-physical, and the event is not just a potential risk or a response update.

After PPSU Officer Uses AI to Follow Up Reports on JAKI, Kalisari Subdistrict Head in East Jakarta Summoned by Inspectorate

2026-04-06
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by a PPSU officer in responding to reports, so an AI system is involved. However, the event centers on the controversy and administrative response rather than any realized harm or damage caused by the AI use. There is no evidence of injury, rights violations, or other harms as defined for an AI Incident. Nor is there a clear credible risk of harm that would qualify as an AI Hazard. Instead, the article focuses on governance and societal response to the AI use, which fits the definition of Complementary Information.

Fallout from PPSU Officer Using AI to Respond to Illegal Parking: Mayor Munjirin Says Kalisari Subdistrict Head Has Been Suspended

2026-04-07
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system in the form of AI-generated images used in a public report, which led to administrative action against officials for integrity concerns. However, the event does not describe any realized harm or credible risk of harm resulting from the AI use. It is primarily about governance and disciplinary response to misuse of AI content in public reporting. Therefore, it does not meet the criteria for AI Incident or AI Hazard but fits as Complementary Information regarding societal and governance responses to AI misuse.

Kalisari Subdistrict Head in East Jakarta Summoned by Inspectorate After Case of PPSU Officer Using AI

2026-04-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate visualizations that altered the real scene, leading to misinformation and public backlash. This constitutes an indirect violation of rights related to transparency and truthful information, harming community trust. The harm has already occurred as the manipulated images were publicly disseminated and criticized. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in public service reporting.

Kalisari Subdistrict Head Questioned by Inspectorate over Case of JAKI Report Manipulation Using AI

2026-04-06
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to alter visual reports, which is an AI system involvement. The AI's use in manipulating images directly led to misleading information being disseminated, which harms community trust and the quality of public service. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d) through misinformation and manipulation of public reports. The ongoing investigation and administrative responses further confirm the seriousness of the incident.

Protested by Residents, PPSU Officer Uses AI Images to 'Remove' Illegally Parked Cars in Kalisari, East Jakarta

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to alter images, changing the appearance of officers and removing cars from photos. However, this use is for responding to public complaints and sharing on social media, with no reported harm or risk of harm. There is no indication that the AI system's use led to injury, rights violations, or other harms defined under AI Incident or AI Hazard. The event is primarily about the use of AI-generated images as a communication tool, which fits the category of Complementary Information as it provides context on AI use but does not describe an incident or hazard causing or plausibly causing harm.

PPSU Officer Uses AI-Edited Images to Respond to Residents' Complaints, Kalisari Subdistrict Head in East Jakarta Questioned

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated images by PPSU officers to respond to complaints, indicating AI system involvement. However, the event does not describe any direct or indirect harm such as physical injury, rights violations, or significant community harm caused by the AI use. The harm is reputational and social backlash, which does not meet the threshold for an AI Incident. There is also no indication of plausible future harm or risk that would qualify it as an AI Hazard. The main focus is on the official response and governance measures following the viral misuse of AI images, fitting the definition of Complementary Information.

Viral: PPSU Officer Uses AI-Edited Images to Answer Residents' Complaints, Presence of a Repair Shop Comes Under Scrutiny

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI-generated images by a public officer to answer complaints, which involves an AI system. However, the event does not describe any realized harm or incident resulting from this use. Instead, it highlights societal and governance responses, including meetings and staff training to prevent future misuse. Therefore, this qualifies as Complementary Information, as it provides context and updates on governance and societal reactions to AI use rather than reporting an AI Incident or Hazard.

Furious, East Jakarta Mayor Munjirin Tells Employees Not to Toy with Residents' Reports

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate or alter images to respond to citizen complaints, resulting in misleading visuals that distort reality (e.g., vehicles disappearing, altered uniforms). This misuse of AI has caused harm by misleading the public and potentially breaching obligations for transparency and accountability in public service. The harm is realized and significant enough for the city official to intervene and prohibit such use. Therefore, this qualifies as an AI Incident due to indirect harm to community trust and violation of governance obligations.

Governor Takes Firm Stance on PPSU Officer Using AI to Deceive Complaining Residents; City Administration Even Holds a Limited Meeting

2026-04-06
Tribun Jatim
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved as the officer used AI-generated images to respond dishonestly to citizen complaints. This misuse of AI directly led to harm by deceiving citizens, undermining trust in public services, and causing social harm to the community. The event involves the use of AI in a way that breaches obligations to provide truthful information and proper service, which fits the definition of an AI Incident involving violation of rights and harm to communities. The subsequent government meeting and disciplinary actions are responses to this incident, not the main event itself.

JAKI Report Answered with AI Photo, Kalisari Subdistrict Head Questioned by DKI Inspectorate

2026-04-06
detik News
Why's our monitor labelling this an incident or hazard?
An AI system is involved as AI-generated photos were used to respond to citizen reports, indicating AI use in public service communication. Although no direct physical harm or legal violation is explicitly reported, the use of AI-generated images in official responses can cause misinformation and harm community trust, which is a form of harm to communities. The event describes an ongoing issue with realized misuse of AI-generated content leading to administrative investigation and corrective actions, fitting the definition of an AI Incident due to indirect harm to community trust and potential misinformation.

Kalisari Subdistrict Head Suspended in Wake of JAKI Report Answered with AI Photo

2026-04-07
detik News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated manipulated photos that led to administrative consequences, including the temporary suspension of a public official and disciplinary action against staff. This shows the AI system's use directly led to harm in the form of reputational damage and disruption of public service integrity. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident. Therefore, this event qualifies as an AI Incident under the framework, as it involves harm to community trust and governance caused by AI-generated manipulated content.

Kalisari Subdistrict Head in East Jakarta Summoned by Inspectorate After Case of PPSU Officer Using AI - ANTARA News Kalimantan Barat

2026-04-06
Antara News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate altered visual content that misrepresented the real situation on the ground. This use of AI led to misinformation and public backlash, which constitutes harm to communities and a breach of obligations related to transparency and truthful public service. Although no physical harm occurred, the incident involves indirect harm through manipulation and misinformation. Therefore, this qualifies as an AI Incident under the framework because the AI system's use directly led to a significant harm (misinformation and loss of trust) and violation of obligations.

Viral: PPSU Officer Uses AI When Responding to Residents' Reports of Illegal Parking in East Jakarta, Flooded with Criticism from Netizens

2026-04-06
tvonenews.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate visualizations in response to citizen reports, but the AI output misrepresented the actual situation, causing public distrust and criticism. Although no physical harm or direct violation of rights is reported, the manipulation of data and misleading visuals can be considered harm to community trust and informational integrity, which falls under harm to communities. The event involves the use of AI leading to realized harm (misinformation and public backlash). Therefore, this qualifies as an AI Incident.

DKI Provincial Government Admits Error in Using AI to Respond to Complaints

2026-04-05
IDN Times
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating images used in complaint responses, and the government admitted to an error in this AI use. This constitutes a misuse of AI in the system's use phase. However, the article does not report any realized harm such as injury, rights violations, or operational disruption. The issue is an acknowledged error without evidence of direct or indirect harm. Therefore, it does not meet the threshold for an AI Incident. It is more appropriately classified as Complementary Information because it provides an update on AI use and governance issues related to a prior or ongoing situation without reporting new harm or plausible future harm.

Complaint Report Answered with AI Photo, DKI Provincial Government Reprimands Kalisari Subdistrict Office

2026-04-05
CNNindonesia
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate manipulated photos that were presented as evidence in official complaint follow-ups, which constitutes misuse of AI in a public administration context. This misuse led to misinformation and undermined the integrity of public service processes, which can be considered harm to communities and a violation of obligations under applicable law related to transparency and accountability. The event describes realized harm due to AI misuse, qualifying it as an AI Incident rather than a hazard or complementary information.

Pramono Demands AI-Based Follow-Ups to Residents' Complaints Never Recur: Transparency Matters

2026-04-06
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fake photos used as falsified evidence in government complaint follow-ups. This misuse led to a breach of trust and transparency obligations by the government office, which is a violation of legal and ethical standards protecting public rights and governance. The harm is realized as it undermines public trust and the integrity of government services. The event includes official responses and sanctions, confirming the incident's seriousness. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Viral: Kalisari Subdistrict Office Follows Up Complaints with AI, DKI Provincial Government Issues Immediate Reprimand

2026-04-06
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to fabricate photographic evidence, which is a misuse of an AI system. This misuse has directly led to harm in the form of reputational damage to public officials and undermines the integrity of public complaint handling processes. Such harm falls under violations of obligations under applicable law intended to protect fundamental rights related to transparency and trust in public institutions. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI misuse.

DKI Provincial Government Reprimands Kalisari Subdistrict Office for Following Up Complaints with AI

2026-04-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the manipulated photos used as evidence were generated by AI. The misuse of AI-generated images to falsify official documents directly leads to a violation of obligations under applicable law and ethical standards protecting public trust and rights, which fits the definition of an AI Incident under violations of human rights or breach of obligations. The harm is realized as it undermines the integrity of public service and misleads the public, which is a significant harm. Therefore, this event qualifies as an AI Incident.

Residents' Complaints Followed Up with AI, Pram: It Must Not Happen Again

2026-04-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating falsified photographic evidence, which was used to mislead in the official complaint follow-up process. This constitutes a misuse of AI leading to a violation of rights and harm to the community's trust in public institutions. The harm has already occurred, and the event involves the use of AI in a way that caused this harm. Hence, this qualifies as an AI Incident.

PPSU AI Case: East Jakarta Mayor Gathers His Staff

2026-04-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate manipulated images as evidence in official public service responses, which misrepresents reality and misleads the public. This manipulation constitutes a violation of rights related to transparency and accountability, harming community trust. The harm has already occurred as evidenced by public outcry and official reprimands. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

DKI Yesterday: From PPSU Officer Using AI to Extreme Weather Warnings

2026-04-07
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating falsified photographic evidence, which was used to misrepresent the handling of citizen complaints. This constitutes a misuse of AI technology leading to harm in the form of reputational damage and undermining public trust, which can be considered harm to communities and a violation of obligations under applicable law protecting rights related to transparency and accountability. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.

DKI Jakarta Provincial Government Responds to Viral JAKI Complaint Follow-Up Photos Suspected of Being AI-Edited

2026-04-05
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated photos used as falsified evidence in an official complaint process, which is a misuse of AI leading to harm in terms of public trust and administrative integrity. This fits the definition of an AI Incident because the AI system's use directly led to a violation of obligations under applicable law and harm to community trust. The government's actions to address and prevent such misuse further confirm the recognition of harm caused by AI misuse.

Viral: Public Complaint Evidence Generated with AI, DKI Jakarta Provincial Government Strengthens Complaint Validation

2026-04-05
Warta Kota
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fabricated photo evidence used in official complaint follow-ups, which is a misuse of AI leading to falsification and harm to public trust and administrative integrity. The harm is realized as the AI-generated evidence was accepted and used, prompting corrective government responses. This fits the definition of an AI Incident because the AI system's misuse directly led to harm (violation of procedural integrity and potential breach of obligations in public administration). The article focuses on the incident and the response, not just potential or future harm, so it is not an AI Hazard or Complementary Information.

DKI Jakarta Provincial Officer Allegedly Used AI Photos to Deceive Public Complainants on JAKI over Illegal Parking

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate manipulated photos that falsely showed the removal of illegally parked cars. This use of AI directly led to harm by misleading the public and obstructing the proper handling of complaints, which can be considered harm to communities and a violation of obligations under applicable law protecting public rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse in falsifying evidence.

Oh Dear! Viral: Residents' Report on Illegal Parking Answered with AI Photo, DKI Provincial Government Speaks Up

2026-04-05
https://news.okezone.com/
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fabricated photos used as evidence in official complaint responses, which constitutes misuse of AI leading to misinformation and potential harm to community trust and governance processes. This misuse has already occurred and caused harm by misleading citizens and undermining the complaint process, thus qualifying as an AI Incident under violations of rights and harm to communities. The government's response is corrective but does not negate the incident classification.

Residents Report via JAKI and Receive AI Photo in Response, DKI to Reprimand Kalisari Subdistrict Office

2026-04-05
detik News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating falsified photographic evidence, which was used in official complaint follow-ups. This misuse of AI directly led to harm by misleading the public and damaging the credibility of public institutions, constituting a violation of rights and harm to community trust. The event involves the use and misuse of AI, with realized harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Viral: Residents' Report on Illegal Parking Answered with AI-Edited Photo, DKI Jakarta Provincial Government Reprimands Kalisari Subdistrict Office

2026-04-05
SINDOnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated edited photos used improperly in official responses, indicating AI system involvement and misuse. However, the harm is limited to administrative misconduct and potential misinformation without direct or indirect harm to health, rights, infrastructure, or property. The government's corrective actions and reprimand indicate a governance response to an AI-related issue rather than an incident causing harm or a hazard posing plausible future harm. Thus, the event fits the definition of Complementary Information, as it provides context on societal and governance responses to AI misuse without constituting a new AI Incident or AI Hazard.

Residents Report via JAKI, Then Receive AI Photo in Response; DKI Provincial Government Admits a Mistake Occurred

2026-04-05
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved as AI-generated or AI-manipulated photos were allegedly used as evidence in official complaint follow-ups. The event stems from the use (or misuse) of AI in producing manipulated evidence. Although no direct harm such as physical injury or legal rights violation is reported, the use of AI-manipulated evidence in public administration could plausibly lead to harm by undermining trust, causing misinformation, or administrative failures. The government is investigating and taking steps to prevent such misuse. Therefore, this event represents a plausible risk of harm due to AI misuse, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

DKI Provincial Government Strengthens Complaint Validation, Bans Use of AI for Follow-Up Evidence

2026-04-05
Republika Online
Why's our monitor labelling this an incident or hazard?
The use of AI to generate falsified evidence in official complaint follow-ups directly led to harm by undermining the integrity and trustworthiness of public service processes, which is a violation of obligations under applicable law and harms community trust. The event involves the use and misuse of AI systems in a way that caused realized harm, not just potential harm. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

Residents' JAKI Reports Followed Up with AI Photo, Jakarta Provincial Government Admits Mistake

2026-04-06
Republika Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated photos in the follow-up to a public complaint, which is an AI system involvement. However, the event is about recognizing and correcting an error in the use of AI-generated content, with no reported direct or indirect harm such as injury, rights violations, or disruption. The authorities' response to strengthen verification and oversight is a governance action related to AI use. Since no harm has occurred and the main focus is on the response and correction of the AI misuse, this fits the definition of Complementary Information rather than an Incident or Hazard.

Illegal Parking Complaint Ends in Suspected AI Use, DKI Provincial Government Steps In

2026-04-06
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate falsified photographic evidence in a public complaint system, which directly led to harm in the form of misinformation, erosion of public trust, and potential violation of legal obligations regarding truthful reporting and administrative integrity. The AI system's misuse is central to the incident, and the harm is realized, not just potential. The government's response and corrective measures further confirm the incident's significance. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Pramono's Firm Reaction After Subordinates Answered Residents' Complaints with AI

2026-04-06
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
An AI system was involved: AI-generated images were used in responses to citizen complaints delivered through the JAKI application. This use of AI produced a response perceived as deceptive or manipulated, which constitutes a violation of trust and transparency towards the public, and can be considered a violation of rights or harm to community trust. Because the AI system's use in producing misleading content has already harmed community trust and transparency in governance, this qualifies as an AI Incident.

Pramono Instructs Inspectorate to Examine Kalisari Subdistrict Head in Wake of JAKI AI Photo

2026-04-06
IDN Times
Why's our monitor labelling this an incident or hazard?
The article mentions AI involvement in editing a photo used in a complaint response, which implies AI system use. However, there is no indication that this use has directly or indirectly caused harm such as injury, rights violations, or community harm. The event is about an ongoing investigation and potential sanctions, which is a governance or societal response to possible misuse. Therefore, this fits the category of Complementary Information rather than an Incident or Hazard.

The Power of Going Viral: Illegal Parking in Kalisari Suddenly Cleared Up, No AI Involved

2026-04-07
CNNindonesia
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating manipulated images (photo evidence) to falsely show that illegal parking had been resolved. This use of AI directly led to a harm related to community trust and governance, as it misrepresented facts and obstructed proper enforcement actions. The administrative sanction against the personnel involved confirms the harm was recognized and materialized. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm to community interests and public order.

Pramono Furious over JAKI Report Answered with AI Photo: Whoever Is at Fault, Punish Them

2026-04-06
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated photos that falsely indicate the resolution of a public complaint. This manipulation is a misuse of AI technology that directly harms the community by misleading citizens and violating principles of transparency and accountability. The involvement of AI in producing deceptive content that leads to harm (misinformation and breach of trust) fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and a breach of obligations intended to protect rights related to truthful information and public service integrity.

Pramono Calls for Investigation into Mastermind Behind the JAKI AI Manipulation, Convinced It Was Not the PPSU Officer

2026-04-07
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI was used to create manipulated images that misrepresent the resolution of a public complaint. This manipulation has already occurred and caused harm by misleading the public and potentially obstructing accountability. The involvement of AI in generating the manipulated photos is explicit, and the harm is realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

PPSU Officer Who Used AI to Handle Residents' Complaints Receives SP1 Warning

2026-04-06
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated images that misrepresented the factual situation in a public complaint system. This misuse of AI led to harm by undermining public trust and spreading false information about the state of illegal parking, which is a community harm and a violation of rights related to truthful public information. The disciplinary sanction (SP1) against the officer confirms the recognition of harm caused. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in public service reporting.

Viral AI Photo on JAKI: Pramono Anung Threatens Firm Sanctions for Kalisari Subdistrict Head!

2026-04-06
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate fake photographic evidence, which was then used to mislead citizens about the status of public service actions. This constitutes a direct harm to the community by spreading false information and violating principles of transparency and honesty in governance. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident as it has directly led to harm (misinformation and breach of trust).

Pramono Has Subdistrict Head Examined After Photo of Illegal Parking Handling on JAKI App Suspected of Being AI-Generated

2026-04-06
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create or manipulate photos presented as evidence in a government application, which misled the public and officials. This constitutes a violation of rights and harm to the community by spreading false information and undermining transparency. The event describes realized harm caused by the AI system's use, not just a potential risk. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Lurah Kalisari dinonaktifkan usai kasus PPSU pakai AI soal parkir liar

2026-04-07
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to alter images in a public report, which directly led to harm in the form of misinformation and loss of public trust. The misuse of AI in this context caused a breach of obligations related to accurate public reporting and transparency, which are fundamental to human rights and governance. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in manipulating official reports.

Petugas PPSU di Jaktim kena SP 1 usai unggah foto AI soal parkir liar

2026-04-06
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate a misleading image that was uploaded as evidence in a public complaint system. This misuse directly led to harm in the form of eroded public trust and a violation of obligations for truthful reporting by public officials. The disciplinary sanction and policy response confirm the harm was materialized and recognized. The AI system's role was pivotal in creating the false impression, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is not speculative or potential but has already occurred, thus excluding classification as an AI Hazard or Complementary Information.

Pramono Beri Hukuman untuk Anak Buahnya yang Tindaklanjuti Aduan Warga dengan AI

2026-04-06
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to handle citizen complaints. However, the article does not report any realized harm such as injury, rights violations, or disruption caused by the AI system. Instead, it highlights concerns about transparency and the inappropriate use of AI, with the government taking corrective action. This fits the definition of Complementary Information, as it provides context on societal and governance responses to AI use, rather than describing an AI Incident or AI Hazard.

Kendaraan yang parkir liar di Pasar Rebo sudah dipindahkan usai viral

2026-04-07
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event centers on the use of AI to generate altered images for reporting purposes, which caused public concern about data manipulation. While the AI system's outputs influenced public perception and led to administrative action, there is no evidence of actual harm such as injury, rights violations, or disruption caused by the AI system itself. The article primarily describes a governance and societal response to the AI use rather than an AI incident causing harm or a hazard posing plausible future harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI use in public service reporting.

Pramono minta jajaran cari pelaku pembuat foto AI di JAKI

2026-04-07
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the photos in question are AI-generated. The misuse of these AI-generated photos in an official report has led to administrative and disciplinary consequences, indicating harm to the integrity of public service operations and potentially to community trust. Although no physical harm or direct legal violation is explicitly stated, the misuse of AI-generated content in public reporting constitutes a violation of expected standards and could be considered harm to community trust and public service integrity. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the use of AI-generated content in a public service context.

Pramono Geram Aduan Warga di JAKI Direspons Pakai AI: Siapapun yang Salah Harus Dihukum

2026-04-06
Liputan 6
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the manipulated images are generated using AI technology. The misuse of AI in this context has directly led to harm in the form of violation of citizens' rights to accurate and truthful responses to their complaints, which falls under violations of human rights or breach of obligations under applicable law. The event describes actual harm occurring, not just potential harm, and involves the use of AI in a way that undermines public trust and governance. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Petugas PPSU Kalisari Kena SP1 Usai Unggah Foto AI Respons Aduan Warga soal Parkir Liar

2026-04-06
Liputan 6
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a misleading photo that was shared publicly, causing misinformation and public concern. The use of AI in this context directly led to reputational harm and misinformation affecting the community's trust. Although no physical injury or legal violation is reported, the harm to community trust and the spread of false information is a significant harm under the framework. The event is not merely a potential risk but a realized incident involving AI misuse, thus classifying it as an AI Incident.

Warganet Kesal, Adukan Parkir Liar di Aplikasi JAKI Malah Dibalas AI

2026-04-06
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the photo used in response to the complaint was manipulated using AI. The misuse of AI-generated images in official communication led to harm in the form of misinformation and erosion of public trust, which is a harm to communities and governance. The event stems from the use and malfunction/misuse of AI in the complaint handling process. Although no physical injury or legal rights violation is reported, the harm to community trust and the integrity of public service responses is significant and directly linked to AI misuse. Hence, it meets the criteria for an AI Incident.

Lurah Kalisari Diduga Manipulasi Aduan JAKI, Pramono Yang Salah Harus Dihukum

2026-04-06
Media Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate photos used as evidence in a government complaint system (JAKI). This manipulation misleads the public and government authorities, constituting a breach of transparency and trust, which can be considered a violation of obligations under applicable law protecting public rights and transparency. The AI system's misuse directly led to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and is under investigation with calls for sanctions.

Laporan Parkir Pakai AI, Pramono Anung Beri Sanksi Lurah

2026-04-06
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to manipulate photos in handling a citizen's report, which is a misuse of AI technology. While this misuse has caused public criticism and administrative consequences, it does not meet the threshold for an AI Incident because no direct or indirect harm (such as physical injury, rights violations, or significant community harm) has occurred. It also does not qualify as an AI Hazard since the harm is not potential but rather a misuse already identified without resulting in the defined harms. The main focus is on the administrative response and investigation, making this best classified as Complementary Information about governance and oversight related to AI misuse in public service.

Laporan Warga di JAKI Dibalas Pakai Foto AI, DPRD DKI Jakarta: Ini Bentuk Pengkhianatan

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate fake photos as part of a false report in response to citizen complaints. This misuse of AI directly led to harm by deceiving the public and undermining trust in public services, which falls under violations of rights and harm to communities. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated fabricated content used in official reporting.

Teguran Keras Bagi Petugas Lapangan yang Palsukan Laporan Tindak Lanjut JAKI Pakai AI

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate falsified images that misrepresent the status of public complaints, directly leading to harm by deceiving the public and undermining trust in government services. The AI system's misuse in producing fake evidence is a direct cause of the harm. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the community's trust and service quality.

Petugas PPSU Kalisari Kena SP1 Usai Unggah Foto AI di JAKI, Lurah Minta Maaf

2026-04-07
Warta Kota
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate a manipulated photo that misled the public about the status of illegal parking. This misinformation can be considered harm to the community by spreading false information and undermining public trust in official responses. The disciplinary action and public apology confirm that the AI-generated content caused a significant negative impact. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Harta Melonjak Nyaris 4 Kali Lipat, Ini Sosok Lurah Siti Nurhasanah Imbas Tipu Warga Pakai AI

2026-04-06
Tribunnews Bogor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate a photo used as evidence in a public complaint system (JAKI). The AI system's output was used to mislead citizens about the handling of a parking violation, which is a direct misuse of AI leading to harm by deceiving the public and potentially violating legal or administrative duties. This meets the criteria for an AI Incident because the AI system's use directly caused harm through misinformation and breach of trust in public administration.

Bukan Dinas Perhubungan, Lurah Kalisari Ungkap Alasan Petugas PPSU yang Tindak Lanjuti Parkir Liar

2026-04-07
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate evidence in handling illegal parking reports, which is a misuse of AI in public administration. This misuse has led to harm in the form of misinformation and a breach of public trust, which falls under violations of rights and obligations under applicable law. Although the investigation is ongoing and the full consequences are not yet clear, the harm is already realized as public complaints were not properly addressed and evidence was manipulated. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Polemik Foto JAKI Diduga Pakai AI, Pramono Tegas Menyentil: Lebih Baik Belum daripada Bohong

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate manipulated photos that were used to falsify official reports, which is a misuse of AI technology leading to harm in the form of misinformation and breach of public trust. The harm is realized, not just potential, as the manipulated AI photos were used in official reports to mislead citizens. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violation of obligations related to transparency and integrity in public service.

Perkara Foto JAKI Pakai AI Bikin Pramono Murka: Nasib Lurah dan Petugas PPSU, Ada yang Kena SP 1

2026-04-07
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in manipulating reports in a government application, which led to harm in the form of undermined transparency and governance integrity. The misuse of AI caused real consequences, including investigations and sanctions against officials and staff. This meets the criteria for an AI Incident because the AI's use directly led to harm related to violations of obligations intended to protect fundamental rights such as transparency and accountability in public administration.

Bikin Heboh! Foto Aduan di JAKI Diduga Direkayasa AI, Petugas PPSU Kalisari Disanksi SP1

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
An AI system was used to create or manipulate a photo that was presented as genuine evidence in a public complaint process. This use of AI directly led to harm in the form of misinformation, undermining public trust and accountability, and resulted in disciplinary sanctions against the involved officer. The event meets the criteria for an AI Incident because the AI system's use directly caused a violation of obligations under applicable law and harm to community trust and governance. The harm is realized, not just potential, and the AI system's role is pivotal in the incident.

Lurah Kalisari Minta Maaf Terkait Foto AI Tindak Lanjut Aduan, Petugas Disanksi

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to manipulate photographic evidence in a public complaint process, leading to misinformation and harm to community trust. This constitutes a direct harm caused by the use of AI, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a general update but a realized harm involving AI misuse. Therefore, it is classified as an AI Incident.

Mengintip Isi Garasi dan Tanah Milik Siti Nurhasanah, Lurah Kalisari Minta Maaf Soal Skandal Foto AI

2026-04-06
Tribun Sumsel
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create manipulated photos (AI-generated content) that were presented as evidence in a public complaint process. This misuse of AI led to misinformation and failure to address a community issue, which harms the community's right to truthful information and effective governance. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's misuse in public administration and community relations.

Hukuman Pemprov DKI untuk Petugas Lapangan yang Balas Aduan Warga soal Parkir Liar Pakai Foto AI

2026-04-06
Tribun Jatim
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated photos used by government field officers to falsify responses to citizen complaints, which is a misuse of AI technology. This misuse has directly led to harm by eroding public trust in a government service platform and violating citizens' rights to truthful information and proper public service. The harm is realized and significant, meeting the criteria for an AI Incident rather than a hazard or complementary information. The involvement of AI in generating fake evidence is central to the incident.

Harta Kekayaan dan Isi Garasi Lurah Kalisari Siti Nurhasanah, Diperiksa Inspektorat Imbas Foto AI

2026-04-07
Tribun Jatim
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to manipulate photos in a public complaint system (JAKI), leading to misleading information about the resolution of a community issue. This manipulation constitutes a violation of trust and possibly legal obligations, harming the community's right to accurate information and effective governance. The AI system's use directly caused this harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Sentil Kerja Pegawai setelah Viral Petugas PPSU, Pramono Anung: Stop Akali Pengaduan Warga Pakai AI

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate complaint handling reports, which is a misuse of an AI system in a public service context. This manipulation misleads citizens and breaches principles of transparency, constituting harm to communities and a violation of rights. The government response to investigate and sanction those responsible confirms the harm has occurred. Hence, this event meets the criteria for an AI Incident due to the direct harm caused by AI misuse.

Respon Pemprov DKI Soal Warga Lapor Parkir Liar di JAKI Direspon Foto AI, Sebut Nama Baik Tercoreng

2026-04-05
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The presence of AI is reasonably inferred from the mention of AI-generated or AI-manipulated evidence in the response to a citizen report. The event involves the use (or misuse) of AI in public service communication. Although the AI-generated evidence is suspected to be invalid or fabricated, the article does not report any realized harm such as injury, rights violations, or significant community harm. The government is investigating and planning corrective measures, indicating the issue is recognized but harm is not confirmed. Therefore, the event represents a plausible risk of harm (e.g., erosion of trust, misinformation) but no confirmed harm yet, fitting the definition of an AI Hazard.

Petugas PPSU Pakai AI Bohongi Warga yang Mengadu Akhirnya Disanksi, Lurah Minta Maaf

2026-04-06
Tribun Jatim
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a manipulated photo falsely showing that illegal parking had been addressed, which misled citizens who filed complaints. This constitutes a misuse of AI in the handling of public service complaints, leading to a violation of trust and potentially undermining the right of citizens to accurate information and effective public service. The harm is realized as the AI-generated content directly caused misinformation and deception. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Disorot, Lurah Kalisari Siti Nurhasanah SP1 Petugas Upload Foto Rekayasa AI di Aplikasi JAKI

2026-04-07
Bangka Pos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a manipulated photo, which was then uploaded in an official capacity. This constitutes AI system involvement in the use phase. However, the harm is limited to reputational damage and procedural misconduct rather than physical injury, rights violations, or other significant harms outlined in the AI Incident definition. The event does not describe any direct or indirect harm to persons, property, or rights, nor does it indicate plausible future harm beyond reputational and procedural issues. Therefore, it does not meet the threshold for an AI Incident or AI Hazard. Instead, it provides complementary information about the misuse of AI in a public service context and the governance response (investigation and sanction).

Terungkap Fakta Baru di Kalisari, Kasus Foto AI di JAKI Berujung Mediasi dan Kesepakatan Warga

2026-04-06
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-generated photo was used to create a false report in a public application, leading to community disruption and official sanctions. The AI system's involvement in generating misleading content caused harm to the community and local governance processes, which qualifies as harm to communities under the AI Incident definition. The sanctions and mediation are responses to this harm, not the primary focus of the article, so this is not merely Complementary Information. Hence, this is classified as an AI Incident.

Pramono Murka Laporan JAKI Diduga Dimanipulasi AI, Perintahkan Inspektorat Turun Tangan

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to manipulate photo evidence in a government service app, which is a clear misuse of AI. This manipulation has already occurred and caused harm by misleading the public and government officials about the status of public complaints. The harm is not physical but relates to trust, transparency, and integrity in public services, which falls under harm to communities and breach of obligations under applicable law. Since the AI system's misuse has directly led to these harms, the event is classified as an AI Incident.

Petugas PPSU Jawab Pengaduan Warga Pakai AI, Pramono: Lebih Baik Belum Selesai daripada Membohongi

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as AI-generated or AI-edited images were used to manipulate reports. The misuse of AI in this context directly leads to harm by deceiving the public and violating principles of transparency and accountability in government service, which can be considered a breach of obligations under applicable law protecting fundamental rights to truthful information and good governance. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.

Skandal Rekayasa AI Laporan di Jakarta Timur: Lurah Kalisari Resmi Dinonaktifkan dari Jabatan

2026-04-07
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to fabricate photos submitted as official evidence in a public complaint process. This manipulation misled authorities and the public, causing reputational harm and undermining trust in public institutions. The AI system's use in this context directly led to a violation of obligations related to data integrity and public service transparency, which falls under violations of rights and harm to communities. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Tepi Jalan di Kalisari Pasar Rebo Jakarta Timur Kini Steril setelah Viral Laporan di Aplikasi JAKI

2026-04-06
Warta Kota
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the JAKI app allegedly using AI) in the reporting and follow-up process. However, there is no indication that the AI system caused any harm or malfunctioned. The mention of AI manipulation of evidence is a suspicion but not confirmed harm or incident. The event focuses on the community's use of the app and the resulting clearing of illegal parking, which is a positive outcome. Since no harm or plausible future harm is described, and the AI's role is supportive and informational, this fits the definition of Complementary Information rather than an Incident or Hazard.

Pramono Minta Inspektorat Dalami Pengunggah Konten AI Parkir Liar di Kalisari

2026-04-07
Okezone News
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate manipulated images in an official context, which is a misuse of AI technology. However, the article does not report any direct or indirect harm resulting from this misuse, such as physical injury, legal rights violations, or significant community harm. The ongoing investigation and apology indicate recognition of the issue but do not confirm harm has occurred. Thus, this situation is best classified as Complementary Information, as it provides context and updates on an AI-related misuse without confirming an AI Incident or AI Hazard.

Kasus Foto AI di JAKI, Pramono: Siapa Salah Harus Dihukum!

2026-04-06
Okezone News
Why's our monitor labelling this an incident or hazard?
An AI system is involved as AI-generated photos were used in official responses, indicating AI use in public service. The event stems from the use (or misuse) of AI-generated content. While there is a concern about manipulation and transparency, the article does not report any realized harm or violation of rights, only the potential for such harm if manipulation is confirmed. Therefore, this situation represents a plausible risk of harm due to AI misuse but no confirmed incident yet. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Viral Lapor Parkir Liar Pasar Rebo ke JAKI, Diduga Dibalas Foto Hasil AI

2026-04-05
detikinet
Why's our monitor labelling this an incident or hazard?
The presence of AI is inferred from the mention of AI-generated (edited) photos used in the response to a citizen's report. The event involves the use of AI in the handling of a public complaint. However, the article does not describe any realized harm such as injury, rights violations, or operational disruption. The concern is about the plausibility that AI-generated false evidence could mislead or harm trust in public services, which is a potential future harm. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Laporan JAKI soal Parkir Liar Dibalas Foto AI Berbuntut Panjang

2026-04-06
detik News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a manipulated photo used to respond to a citizen report. This use of AI led to misinformation and a breach of trust between the public and authorities, which can be considered a violation of rights or harm to community trust. The incident involves the use (and misuse) of AI-generated content causing harm indirectly by misleading citizens and undermining public service accountability. Therefore, this qualifies as an AI Incident.

Petugas PPSU Dijatuhi SP1 Usai Unggah Foto Penertiban Parkir Liar Hasil AI

2026-04-06
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create a fake photo, which was then used to mislead the public about enforcement actions. This constitutes a misuse of an AI system leading to harm in the form of misinformation and breach of public trust. Although no physical harm or legal violation is detailed, the dissemination of false information by a public official using AI-generated content is a clear harm to the community and public administration integrity. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

PSI: Balas Laporan JAKI Pakai Foto AI Rusak Kepercayaan Publik

2026-04-07
detik News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a photo used to respond to a public complaint. The use of this AI-generated photo was misleading and falsely suggested that the problem was resolved, which damaged public trust in the government. This constitutes harm to communities and a violation of public service integrity, fulfilling the criteria for an AI Incident. The event describes realized harm caused by the AI system's use, not just a potential risk, so it is not an AI Hazard or Complementary Information.

Lurah di Jaktim Minta Maaf Laporan Parkir Liar via JAKI Dibalas Foto AI

2026-04-06
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (a photo manipulated by AI) used in a misleading way by a public official's staff. However, the harm described is reputational and informational, with no direct or indirect physical harm, rights violations, or critical infrastructure disruption reported. The official's apology and disciplinary action indicate a response to the misuse. Since the article focuses on the response and learning from the incident rather than the incident causing significant harm, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Langkah Verifikasi JAKI Diperbarui Usai Heboh Laporan Dibalas Pakai Foto AI

2026-04-07
detik News
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the photo used to respond to a public report was generated by artificial intelligence. The misuse of this AI-generated photo in an official context has directly led to reputational harm and undermined trust in public service, which can be considered harm to communities and a violation of expected service standards. The disciplinary sanction and updated verification processes indicate recognition of this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (damage to reputation and trust) and prompted official response measures.

Mobil Parkir Liar di Jaktim Sudah Dipindahkan Usai Viral Diedit Foto AI

2026-04-07
detik News
Why's our monitor labelling this an incident or hazard?
The AI system was used in the development or use phase (photo editing) but did not cause any harm or plausible future harm. The event centers on the public and official response to the AI use, including sanctions against the officer, rather than harm caused by the AI itself. The illegal parking was resolved independently of the AI system's involvement. Hence, the main focus is on governance and societal response to AI use, fitting the definition of Complementary Information.

Video: Laporan Warga di JAKI Dibalas Foto AI, Pramono Minta Lurah Diperiksa

2026-04-06
20DETIK
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated photo as a response to a citizen's report, which is a misuse of AI leading to misinformation and a breach of transparency by public officials. This constitutes harm to community trust and governance, fitting the definition of an AI Incident. The governor's call for investigation and sanctions indicates recognition of harm caused by the AI-generated content. The event involves the use of AI and its misuse leading to harm, not just a potential risk or complementary information.

Pram Minta Cari Pembuat-Pengunggah Foto AI ke JAKI: Tak Bisa Salahkan PPSU

2026-04-07
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated photos used in a public application, which led to misinformation and public backlash. The AI system's outputs (manipulated images) were used in a way that caused harm to the community's trust and the integrity of public reporting. Although no physical harm or legal rights violations are mentioned, the misinformation and reputational harm fall under harm to communities or other significant harms caused by AI. Therefore, this is an AI Incident because the AI system's use directly led to harm through misinformation dissemination.

Pramono Minta Inspektorat Periksa Lurah Buntut Laporan Warga Dibalas Foto AI

2026-04-06
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate a fake photo as a response to a citizen's report, which is a misuse of AI leading to harm in the form of deception and violation of public trust. This fits the definition of an AI Incident because the AI system's use has directly led to harm related to violations of obligations intended to protect fundamental rights such as transparency and honesty in public administration. The governor's response and call for investigation further confirm the seriousness of the incident.

Lurah Kalisari Beri SP1 ke Petugas PPSU yang Unggah Foto Hasil AI Penanganan Parkir Liar

2026-04-06
SINDOnews
Why's our monitor labelling this an incident or hazard?
The AI system was involved in generating a photo used in response to a complaint, but the event centers on the sanctioning of the worker for inappropriate use of AI-generated content rather than any harm caused by the AI system itself. There is no evidence of injury, rights violation, or other harms as defined. Therefore, this is not an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on governance and response to AI misuse in public service, enhancing understanding of AI's societal implications without describing a new harm or risk.

Pramono Minta Inspektorat Dalami Pengunggah Konten AI Parkir Liar di Kalisari: Jangan Hanya Bisa Menyalahkan PPSU

2026-04-07
SINDOnews
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-generated photos in response to citizen reports, which has raised concerns and led to an official investigation. While AI is involved, there is no evidence of actual harm or a credible risk of harm resulting from this use. The event is primarily about the investigation and public reaction to the use of AI-generated images, making it complementary information rather than an incident or hazard.

Lurah Kalisari dan Kasubdin Bakal Diperiksa Kasus Pengaduan Parkir Liar Dibalas dengan Foto Editan AI

2026-04-06
SINDOnews
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI was used to edit a photo that was sent in response to a citizen's complaint. While this raises concerns about misuse of AI-generated content and possible ethical or procedural violations, there is no evidence of actual harm occurring yet. The focus is on the investigation and potential disciplinary action, which is a governance and societal response to an AI-related issue. Therefore, this fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Wali Kota Jaktim Kumpulkan OPD: Laporan JAKI Harus Nyata, Bukan Rekayasa AI

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate or fabricate evidence in a public complaint follow-up, which directly led to harm in the form of misinformation and breach of public trust. The AI system's misuse is central to the incident, and disciplinary action has been taken, confirming the harm has materialized. This fits the definition of an AI Incident as the AI system's use directly led to a violation of obligations and harm to the community's trust in public services.

Pramono Ogah Salahkan PPSU di Kasus Laporan JAKI Direspons Foto AI: Pada Saatnya Ketahuan

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated photos used in a government application, which constitutes misuse of AI technology. The manipulation has already occurred and caused harm by misleading the public and undermining trust in public services. The involvement of AI in creating manipulated content that was uploaded and sanctioned confirms direct harm. The ongoing investigation and sanctions further support that this is a realized harm event, not just a potential risk. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

PPSU Sudah Disanksi, Pramono Masih Cari Tahu Pengunggah Foto Rekayasa AI di JAKI

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated photos used in a public reporting app, which led to disciplinary sanctions and an official investigation. The AI system's role in creating false or misleading content that was uploaded and disseminated directly caused harm by misleading public reports and potentially affecting public trust and administrative processes. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and a breach of obligations under applicable law. The event is not merely a potential risk or a complementary update but a realized harm involving AI misuse.

Ketulahnya PPSU Balas Laporan JAKI Pakai Foto AI soal Parkir Liar...

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create manipulated photos that were presented as genuine evidence in a public complaint system. This use of AI directly led to harm by misleading the public and breaching transparency obligations, which is a violation of rights and harms community trust. The involvement of AI in the manipulation and the resulting official sanctions confirm the direct link between AI use and harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Alasan PPSU Tangani Laporan JAKI soal Parkir Liar di Kalisari, Bukan Dishub

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to fabricate or manipulate photos as evidence in response to public reports, which misleads the community and obstructs proper handling of the issue. This constitutes a violation of rights and harm to the community. The AI system's use in this context directly led to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely a potential risk or a response update but a realized harm caused by AI misuse.

Saat Pramono Menuntut Kejujuran Laporan JAKI...

2026-04-07
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated photos as false evidence in official reports, which misleads citizens and undermines government transparency. The AI system's outputs are used to cover up failures in service delivery, constituting a violation of rights and trust. This is a direct harm caused by the AI system's misuse in the reporting process, meeting the criteria for an AI Incident under violations of rights and harm to communities. The involvement is in the use of AI to generate deceptive content, leading to realized harm.

Skandal Laporan JAKI Direspons Foto AI Berujung SP1 untuk Petugas PPSU

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in manipulating photographic evidence (AI-generated or AI-edited photos) used in official responses to citizen reports. This manipulation directly caused harm by deceiving the public and violating principles of transparency and accountability, which are fundamental rights and obligations under applicable law. The event includes the use and misuse of AI in a way that led to realized harm, not just potential harm. The disciplinary actions and official responses confirm the recognition of harm caused. Hence, this is an AI Incident rather than a hazard or complementary information.

Usai Viral Foto Rekayasa AI di JAKI, Parkir Liar di Kalisari Sudah Steril

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate or fabricate evidence in response to a public complaint, which directly led to harm in the form of misinformation and breach of trust. This misuse of AI in a public governance context constitutes a violation of rights and obligations, meeting the criteria for an AI Incident. The event involves the use and misuse of an AI system (image manipulation) that caused harm to the community's trust and the integrity of public processes.

PPSU Rekayasa Laporan JAKI Pakai AI, Wali Kota Jaktim: Jangan Main-main...

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI use in fabricating or manipulating evidence related to public service responses, which misled citizens and caused harm to community trust and transparency. The AI system's misuse directly led to a violation of rights and harm to the community, meeting the criteria for an AI Incident. The official response and sanctions confirm the harm has materialized rather than being a potential risk, ruling out AI Hazard or Complementary Information classifications.

Foto Penanganan Parkir Liar Diduga Hasil AI, Pramono Minta Lurah Diperiksa

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI was used to create manipulated photographic evidence in response to citizen reports about illegal parking. This manipulation misleads citizens and damages trust in government transparency, which is a violation of obligations intended to protect fundamental rights and governance principles. The AI system's misuse directly caused harm by falsifying official reports, meeting the criteria for an AI Incident. The involvement is in the use and misuse of AI, leading to realized harm rather than a potential risk or mere complementary information.

Pramono Sentil Pejabat Pemprov DKI soal Laporan JAKI: Jangan Bohongi Warga dengan AI

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to manipulate reports and produce fake photographic evidence in response to public complaints. This manipulation has already occurred and has harmed the community by deceiving them and obstructing proper resolution of their complaints. The AI system's role in fabricating evidence is central to the incident, fulfilling the criteria for an AI Incident due to violation of rights (transparency and truthful governance) and harm to the community through misinformation and loss of trust. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Petugas PPSU Diberi SP1 Usai Unggah Foto Rekayasa AI di JAKI, Diminta Tak Mengulangi

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a manipulated photo that was presented as evidence of action taken on a public complaint. This use of AI directly contributed to misleading the public and caused reputational harm and potential erosion of trust in public institutions. The event involves the use and misuse of AI-generated content leading to a violation of obligations under applicable law and harm to the community's trust. The formal sanction and apology confirm the harm has materialized. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Lurah Kalisari Minta Maaf soal Foto Rekayasa AI di JAKI, Petugas PPSU Diberi SP1

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a manipulated photo used as false evidence in official complaint handling. This misuse of AI directly led to harm by misleading the public and breaching trust, which qualifies as harm to communities and a violation of obligations under applicable law protecting rights. The event is not merely a potential risk but an actual incident with consequences and sanctions, thus classifying it as an AI Incident rather than a hazard or complementary information.

Dugaan AI dalam Aduan Parkir Liar, Tantangan Baru Pemprov DKI

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create or edit photos that were presented as evidence in response to public complaints, which is a misuse of AI leading to misinformation and undermining public trust. This misuse has already occurred and caused harm by misleading complainants and complicating public administration. The involvement of AI in producing false evidence directly relates to harm under the framework, specifically harm to communities and violation of obligations under applicable law regarding transparency and accountability. Hence, this is an AI Incident rather than a hazard or complementary information.

Saat Laporan Warga DKI Malah Dibalas Foto AI: "Jangan Asal Bapak Senang"

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to create manipulated photos as false evidence in official responses to citizen complaints. This use of AI directly caused harm by misleading the public and violating principles of transparency and accountability in public service, which falls under violations of rights and breach of obligations. Therefore, this event qualifies as an AI Incident due to the realized harm caused by AI misuse in public administration.

Laporan ke JAKI Direspons Foto AI, Pramono: Siapapun yang Salah Harus Dihukum

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to manipulate photos as part of falsified responses to citizen reports, which is a misuse of AI technology. This manipulation has caused harm by violating principles of transparency and trust in government, which falls under violations of rights and harm to communities. The involvement of AI in the manipulation is direct and has led to realized harm, not just potential harm. Therefore, this qualifies as an AI Incident.

Camat Pasar Rebo Kumpulkan PPSU dan Lurah Kalisari Usai Laporan JAKI Direspons AI

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated manipulated images used as false evidence in official complaint handling, which constitutes a misuse of AI leading to harm in terms of misinformation and violation of public trust. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to community trust). The involvement is in the use of AI in the process of responding to public reports, and the harm is realized, not just potential. The article also mentions administrative responses, but the primary focus is on the misuse of AI causing harm, not just the response, so it is not Complementary Information. Hence, the classification is AI Incident.

Laporan ke JAKI Direspons Foto AI, DPRD DKI: Jangan Bikin Laporan Asal Bapak Senang

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated photos as false evidence in official reports responding to public complaints. This misuse of AI directly leads to harm by deceiving the public and obstructing proper resolution of issues, which is a violation of rights and harms community trust. The AI system's role is pivotal in generating the falsified evidence. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI misuse in public service reporting.

Heboh Laporan Warga Direspons Foto AI, Netizen Bandingkan JAKI Masa Ahok dan Anies - Harian Terbit

2026-04-06
harianterbit.com
Why's our monitor labelling this an incident or hazard?
JAKI qualifies as an AI-involved system because AI-generated or AI-manipulated photos were used in its responses to citizen reports. This use of AI directly led to harm in the form of public dissatisfaction and loss of trust in the official complaint system, which is harm to communities. The article states the harm has materialized, with public backlash and an official response, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Terlalu! Laporan Warga Diduga Dibalas Pakai AI, Gubernur Pramono Berang - Harian Terbit

2026-04-06
harianterbit.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create manipulated images as responses to citizen reports, which misleads the public and breaches the fundamental principles of transparency and honesty in public service. This use of AI directly leads to harm in the form of misinformation and erosion of trust, which falls under harm to communities and violations of rights. The involvement of AI in producing deceptive content that affects public trust and service integrity meets the criteria for an AI Incident rather than a hazard or complementary information.

Pramono Perintahkan Usut Foto AI di JAKI, Lurah Kalisari Dinonaktifkan Sementara

2026-04-07
tvonenews.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the photos in question are AI-generated and used within a government digital reporting system. The event stems from the use (and possible misuse) of AI-generated content in official public service reports, raising concerns about data manipulation and integrity. Although no direct physical harm or legal violation is explicitly stated, the manipulation of official reports constitutes a violation of obligations under applicable law intended to protect fundamental rights such as transparency and accountability in public administration. This qualifies as an AI Incident because the AI-generated content has directly led to harm in terms of undermining the credibility and integrity of public service reporting, prompting official investigations and administrative actions.

Curhat Warga Jakarta yang Keluhkan Aplikasi JAKI, Laporkan Parkir Liar tapi Diduga Hasilnya Hanya Jepretan AI - Suara Merdeka Pekalongan

2026-04-05
Suara Merdeka Pekalongan
Why's our monitor labelling this an incident or hazard?
The JAKI application involves AI in that AI was used to generate or manipulate images presented as evidence. The event involves AI-generated manipulated images that misled users about the enforcement of illegal parking, a form of harm to the community and a violation of rights to transparent and truthful information. The harm is realized: the manipulated AI content directly misled citizens, constituting an AI Incident rather than a mere potential risk.

Kasus JAKI Berbuntut Panjang, Pramono Minta Lurah hingga Diskominfotik Diperiksa

2026-04-06
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate fake images as part of the response to citizen complaints, which misrepresents the actual situation and undermines transparency in public service. This constitutes a violation of rights related to truthful information and public accountability, thus meeting the criteria for an AI Incident. The AI system's misuse directly led to harm by deceiving citizens and officials, damaging trust and governance. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Pramono Bakal Periksa Lurah Kalisari Terkait Laporan Warga Ditindaklanjuti dengan Foto AI |Republika Online

2026-04-06
Republika Online
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as AI-generated photos were used to handle citizen reports. The misuse of AI-generated content to falsely represent actions taken on reports constitutes a breach of trust and transparency, which can be interpreted as a violation of rights and ethical obligations by public officials. Although no physical harm or direct legal violation is explicitly stated, the deceptive use of AI in public service impacts the integrity of governance and citizens' rights to accurate information. Therefore, this event qualifies as an AI Incident due to the realized harm in terms of violation of rights and breach of obligations related to transparency and honesty in public administration.

Foto AI Penanganan Parkir Liar Viral, Lurah Kalisari Minta Maaf

2026-04-06
cf.febriyanto.io
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to create a manipulated image that falsely represented the parking situation, which is a misuse of AI-generated content. While this led to public backlash and disciplinary measures, the event does not describe any realized harm such as injury, rights violations, or other significant harms directly or indirectly caused by the AI system. The incident is more about reputational and trust issues rather than concrete harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on societal and governance responses to AI misuse and the resulting public and official reactions.

Lurah Kalisari Minta Maaf soal Respons JAKI Pakai Foto AI, Petugas PPSU Disanksi SP1

2026-04-06
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images used in a misleading way in official reporting, which is a misuse of AI technology. The resulting harm is to public trust and credibility, a form of harm to communities, but not one clearly articulated as a rights violation, injury, or property damage. The sanctioning of the officer and the apology indicate recognition of the issue, but no direct or indirect harm as defined for an AI Incident. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI misuse rather than describing a new AI Incident or Hazard.

Pramono Sentil Pejabat Pemprov DKI soal Laporan JAKI: Jangan Bohongi Warga dengan AI

2026-04-06
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the photos used as evidence are allegedly AI-generated or AI-edited. The misuse of AI in this context leads to harm by deceiving citizens and undermining the integrity of public service responses, which constitutes a violation of rights and harm to communities. Since the harm is realized and directly linked to the AI system's misuse, this qualifies as an AI Incident.

Foto AI Dipakai untuk 'Mengelabui' Keluhan Warga, Praktik 'Asal Bapak Senang' di Pemprov Jakarta? |Republika Online

2026-04-06
Republika Online
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to edit photos that were presented as evidence of problem resolution, which was false. This misuse of AI led to misinformation and a breach of public trust, harming the community's right to accurate information and effective governance. The AI system's role is pivotal in creating the false impression, thus meeting the criteria for an AI Incident involving violations of rights and harm to communities.

Petugas PPSU Kelurahan Kalisari yang Gunakan Foto AI untuk Respons Aduan Warga Dapat SP1 |Republika Online

2026-04-06
Republika Online
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a misleading photo, which is a misuse of AI technology. However, the event does not describe any realized harm such as injury, rights violations, or significant community harm. The main focus is on the administrative response (issuing a warning) and the public apology, which are governance and societal responses to an AI misuse case. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.