JKT48's Freya Reports AI-Generated Inappropriate Image Manipulation to Police

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Freya Jayawardana of JKT48 reported to Jakarta police the misuse of AI technologies, including Grok and Face Swap, to manipulate her photos into inappropriate content on social media. The incident caused reputational harm and distress, prompting a police investigation into the AI-driven image manipulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The report describes a case where AI was allegedly used to manipulate data in online posts that impersonate the victim, causing harm. The involvement of AI in the misuse and the resulting harm to the victim's rights and reputation qualifies this as an AI Incident. The police calling the victim for clarification is part of the investigation process, not a separate category. Therefore, this event is best classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Police summon JKT48's Freya over AI misuse report

2026-03-11
Antara News Kepri
Why's our monitor labelling this an incident or hazard?
The report describes a case where AI was allegedly used to manipulate data in online posts that impersonate the victim, causing harm. The involvement of AI in the misuse and the resulting harm to the victim's rights and reputation qualifies this as an AI Incident. The police calling the victim for clarification is part of the investigation process, not a separate category. Therefore, this event is best classified as an AI Incident.

Police Investigate Case of JKT48's Freya's Photos Being Manipulated with AI

2026-03-12
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to manipulate images, which constitutes the use of an AI system. The harm caused is a violation of personal rights and emotional distress, fitting under harm to a person or groups of people. Since the misuse has already occurred and a formal complaint has been filed, this qualifies as an AI Incident rather than a hazard or complementary information.

JKT48's Freya Reports Account That Manipulated Photos with AI to Police

2026-03-11
IDN Times
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI systems (Grok and Face Swap) to manipulate images, which constitutes misuse of AI technology. The harm involves violation of personal rights and manipulation of data, which falls under violations of human rights or breach of applicable law protecting fundamental rights. Since the AI system's misuse has directly led to harm (manipulated images causing potential reputational damage), this qualifies as an AI Incident rather than a hazard or complementary information.

JKT48's Freya Goes to the Police! Furious Her Photos Were Manipulated with Grok AI

2026-03-11
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Grok and Face Swap) used to manipulate photos, which were then spread on social media, causing harm to the individual by misrepresenting her image. This constitutes a violation of rights and electronic data manipulation under law, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), and the AI system's misuse is central to the incident. Therefore, this is classified as an AI Incident.

These Are the Photos of JKT48's Freya Vulgarly Edited with AI That Prompted a Police Report

2026-03-11
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create manipulated images that harm an individual's personal and reputational rights, which falls under violations of human rights or breaches of applicable law protecting fundamental rights. The harm has already occurred as the manipulated content was posted and caused damage, leading to police involvement. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-enabled manipulation.

Police summon JKT48's Freya on Thursday over AI misuse report

2026-03-11
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI for data manipulation and posting content that impersonates and harms a victim, which constitutes a violation of rights and harm to the individual. The AI system's misuse has directly led to harm, fulfilling the criteria for an AI Incident. The police investigation and victim report confirm that harm has materialized, not just a potential risk. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

JKT48's Freya Summoned by Police: Over What Case?

2026-03-11
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article references AI misuse and a police investigation, indicating the involvement of an AI system. However, it does not describe any realized harm or specific incident caused by the AI system, nor does it detail a plausible future harm. The focus is on the legal process of summoning a person for questioning, which is an update on an ongoing matter rather than a report of a new incident or hazard. Therefore, this qualifies as Complementary Information, providing context and updates related to AI misuse without reporting a new AI Incident or AI Hazard.

Following Her Report, JKT48's Freya Undergoes Questioning Today

2026-03-11
Jawa Pos National Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions misuse of AI leading to manipulated social media posts impersonating Freya JKT48, causing reputational harm. The AI system's use in generating or manipulating content directly led to harm to the individual, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a reported case of realized harm due to AI misuse.

JKT48's Freya: The Tough Captain Behind the AI Manipulation Report

2026-03-12
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based photo manipulation (deepfake) used to harm Freya's image, which is a direct use of an AI system causing harm to an individual's rights and reputation. The legal action taken and the harm described meet the criteria for an AI Incident, as the AI system's use has directly led to harm (violation of rights).

Chronology of JKT48's Freya's AI Photo Manipulation Report, with Evidence Dating Back to 2022

2026-03-12
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create manipulated photos (deepfakes) that damage Freya's reputation, constituting harm to the individual. The involvement of AI in producing harmful content that violates rights and leads to legal proceedings meets the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's misuse is central to the event.

Fighting AI Deepfakes, JKT48's Freya Uses the New ITE Law to Charge Photo Manipulation Perpetrators

2026-03-12
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (chatbot AI generating deepfake images) to create manipulated, harmful content. This use of AI has directly led to harm to the person (reputational damage and violation of personal rights) and is subject to legal action under relevant laws. Therefore, it meets the criteria for an AI Incident because the AI system's use has directly caused harm and legal violations.

Tomorrow, JKT48's Freya to Be Questioned over Alleged AI Technology Misuse

2026-03-11
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI technology is allegedly misused to create social media posts impersonating a victim, which constitutes a violation of rights and harm to the individual. The AI system's role is central to the harm, as it is used to manipulate data and create misleading content. Since the harm has occurred and a formal complaint has been filed, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Chronology of JKT48's Freya Reporting Indecent AI Photo Manipulation

2026-03-11
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to manipulate photos into non-consensual, indecent content, which harms the individual's rights and personal dignity. The harm is realized and ongoing, with evidence collected over multiple years and a formal legal complaint filed. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. This is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse.

JKT48's Freya Requests Postponement of Questioning After Reporting Photo Misuse via Artificial Intelligence Technology

2026-03-12
Warta Kota
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technologies (Grok and Face Swap) to manipulate photos, which is a direct use of AI systems. The harm involves violation of personal rights through manipulated images, which falls under violations of human rights or breach of applicable laws protecting individual rights. Since the misuse has already occurred and led to a police report, this qualifies as an AI Incident rather than a hazard or complementary information.

Photos Indecently Edited with Grok AI, JKT48's Freya Reports to Police

2026-03-12
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated, inappropriate images of a person, which is a direct violation of personal rights and can be considered harm to the individual. The AI system's use in generating these images is central to the incident, and the harm is realized as the manipulated content is already circulating. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Today, South Jakarta Police Question JKT48's Freya over Photo Manipulation Case

2026-03-12
Tribun Jakarta
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create manipulated, inappropriate images of a person without consent, which constitutes a violation of rights and harm to the individual. The AI system's role in generating these images is explicit and directly linked to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and reputational damage).

JKT48's Freya's Questioning over AI Misuse Postponed Today

2026-03-12
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (Grok AI) to edit photos in a way that causes harm to a person's dignity and privacy, which is a violation of rights under applicable law. The harm is realized as the victim has reported feeling uncomfortable and harmed by the manipulated images posted online. The police investigation confirms the misuse and harm caused by the AI system's outputs. Hence, this is an AI Incident involving the use and misuse of an AI system leading to harm to an individual's rights and reputation.

Police Look into JKT48's Freya's Report of Alleged AI Misuse

2026-03-11
detik hot
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to manipulate images, which is a clear involvement of an AI system. The misuse of AI to create harmful content that negatively affects a person constitutes a potential violation of rights and harm to the individual. Since the case is under investigation and the harm is alleged but not yet legally confirmed or fully detailed, this situation represents a plausible risk of harm stemming from AI misuse. Therefore, it fits the definition of an AI Hazard rather than an AI Incident, as the harm is not yet fully established or confirmed.

How JKT48's Freya First Learned Her Photos Were Manipulated with AI and Went to the Police

2026-03-11
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to manipulate photos in a harmful way, causing reputational and personal harm to Freya. This constitutes a violation of rights and harm to the individual, which aligns with the definition of an AI Incident. The involvement of AI in the manipulation and the resulting harm is clear and direct, and the case is being formally investigated by authorities.

JKT48's Freya to Give Clarification Tomorrow on Report of Alleged AI Misuse

2026-03-11
detik hot
Why's our monitor labelling this an incident or hazard?
The article involves an AI system in the context of alleged misuse (likely AI-generated manipulated video content). However, the event is currently limited to a report and the start of an investigation without evidence of actual harm or incident outcomes. Since the potential harm is plausible but not yet realized or detailed, this situation fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the initial report and investigation, not a follow-up or response to a known incident. Therefore, the classification is AI Hazard.

Tomorrow, JKT48's Freya to Be Questioned by Police over AI Photo Report

2026-03-11
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to edit photos into inappropriate content, which directly harmed the individual by violating her rights and causing distress. The police investigation confirms that the harm is realized and not just potential. The AI system's use in creating manipulated images that infringe on personal rights fits the definition of an AI Incident, as it involves violations of human rights and harm to the individual. Hence, the classification is AI Incident.

Her Photos Edited into Indecent Images with Grok AI, JKT48's Freya Reports to Police

2026-03-11
detik News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Grok AI) to create manipulated images that caused harm to a person's reputation and emotional well-being. This constitutes a violation of personal rights and is a clear harm caused by the AI system's misuse. The harm has already occurred, and legal action is being taken, confirming the realized impact. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

JKT48's Freya Reports Alleged AI Misuse to South Jakarta Police

2026-03-11
detik hot
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred because the manipulation of identity on social media likely involves AI technologies such as deepfakes or AI-generated content. The event involves the use of AI (or AI-enabled tools) to create manipulated posts impersonating a person, which constitutes a violation of rights and harm to the individual. Since the harm (identity manipulation and reputational damage) has already occurred and is under investigation, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to a person (violation of rights).

JKT48's Freya Reports Alleged Indecent Photo Manipulation to Police

2026-03-11
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' was used by unknown individuals to create manipulated, inappropriate images of Freya, which is a direct misuse of AI technology causing harm to her personal rights and dignity. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational harm). The report to police confirms the harm is realized, not just potential. Therefore, this event is classified as an AI Incident.

JKT48's Freya to Be Summoned by Police on Thursday over AI Misuse Case

2026-03-11
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions alleged misuse of AI technology to create social media posts impersonating the victim, causing harm. The involvement of AI in manipulating data and generating inappropriate content that harms the victim's reputation and causes them to file a police report fits the definition of an AI Incident, as harm to the individual has occurred due to the AI system's use. The ongoing investigation and police summons further confirm the seriousness and realized nature of the harm.

JKT48's Freya Requests Postponement of Clarification on Indecent Photo Manipulation Report

2026-03-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (artificial intelligence) to manipulate photos in a way that harms the individual by creating indecent images without consent. This constitutes a violation of personal rights and causes harm to the individual, fitting the definition of an AI Incident. The involvement of the AI system in generating manipulated content that leads to harm is direct and material. Therefore, this event qualifies as an AI Incident.

JKT48's Freya's Questioning Postponed Today

2026-03-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Grok and Face Swap) for photo manipulation, which is a misuse of AI technology leading to harm (violation of rights through manipulated images). The report has been filed, indicating the harm has occurred or is ongoing. Although the current news is about the postponement of the police summons, the underlying event is an AI Incident due to the misuse of AI causing harm. Therefore, the classification is AI Incident.

JKT48's Freya Reports Account Suspected of Using Grok AI to Manipulate Photos

2026-03-11
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI system (Grok) to manipulate images without consent, causing harm to the individual’s rights and privacy. The victim has filed a formal complaint, and the police are investigating, confirming that harm has occurred due to the AI system's misuse. This meets the criteria for an AI Incident as the AI system's use has directly led to a violation of rights and harm to the individual.

Her Photos Inappropriately Manipulated with Grok AI, JKT48's Freya Firmly Reports to Police

2026-03-11
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology (Grok and Face Swap) to manipulate images in a harmful way, causing violations of rights and harm to the individual involved. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and reputational damage). The police report and ongoing investigation confirm the harm has occurred and is being addressed legally. Therefore, this is classified as an AI Incident.

Police Summon JKT48's Freya over Alleged AI Manipulation Case

2026-03-12
Tabloidbintang.com
Why's our monitor labelling this an incident or hazard?
The article describes an AI system being used to create manipulated content that falsely implicates a person, causing reputational damage. This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The involvement of AI in generating the harmful content and the resulting legal action confirm that this is not merely a potential hazard or complementary information but an actual incident where AI use has led to harm.

JKT48's Freya Scheduled to Give Clarification to Police over Alleged AI Misuse

2026-03-11
Kabarin.com
Why's our monitor labelling this an incident or hazard?
The report involves an AI system used to manipulate social media posts impersonating a victim, which constitutes a violation of rights and harm to the individual. The police investigation and formal complaint indicate that harm has materialized. Therefore, this qualifies as an AI Incident because the AI system's misuse has directly or indirectly led to harm (violation of rights and reputational damage).

JKT48's Freya's Questioning Called Off: Here Is the Police's Explanation

2026-03-12
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article mentions a report of AI misuse and a police investigation, indicating possible concerns about AI-related harm. However, no direct or indirect harm caused by an AI system is described, nor is there evidence of an AI system malfunction or misuse leading to realized harm. The event is about an investigation and a postponed examination, which suggests a potential risk or concern but no confirmed incident. Thus, it fits the category of Complementary Information as it provides context and updates on an AI-related investigation without confirming an AI Incident or Hazard.

Police Postpone Questioning of JKT48's Freya over Alleged AI Misuse

2026-03-12
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system used to generate or manipulate social media content that allegedly harms a person's reputation, which fits the definition of an AI Incident due to violation of rights (potential defamation). However, since the case is still under investigation and no confirmed harm or legal ruling has been established, the event is best classified as Complementary Information providing an update on a developing AI-related legal matter rather than a confirmed AI Incident or Hazard.

Chronology of JKT48's Freya Reporting AI Misuse to Police: Her Photos Manipulated into Explicit Images Since 2022

2026-03-12
Warta Kota
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (specifically AI features such as Grok and Face Swap) to manipulate photos of Freya JKT48 into sexually explicit content, which has been distributed on social media. This manipulation constitutes a violation of personal rights and harms the individual, fulfilling the criteria for harm to a person or group. The harm is realized and ongoing, as the case spans multiple years and has led to a formal police report and investigation. The AI system's misuse directly led to this harm, making it an AI Incident rather than a hazard or complementary information.

Police Still Investigating JKT48's Freya's Report of Photos Allegedly Manipulated with AI

2026-03-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to manipulate photos, which constitutes the involvement of an AI system. The manipulation of photos without consent can be considered a violation of personal rights, potentially causing harm to the individual. Since the police are still investigating and no confirmed harm or incident has been established, this situation represents a plausible risk of harm rather than a confirmed incident. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but has not yet been confirmed as causing harm.

Facts About JKT48's Freya's Police Report over AI-Manipulated Photos

2026-03-13
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to manipulate photos, which is an AI system involvement. The manipulation has caused harm to the individual by misrepresenting her image, which constitutes a violation of rights. The legal complaint and police investigation confirm that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI misuse.

Police Reveal How JKT48's Freya First Learned Her Photos Were Manipulated with AI

2026-03-12
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based manipulation of Freya's photos, indicating the involvement of an AI system. The misuse of AI to create manipulated images that harm the individual's reputation and privacy constitutes a violation of rights, fulfilling the criteria for harm under the AI Incident definition. The event involves the use of AI (misuse) leading to realized harm, and the police investigation confirms the seriousness of the incident. Hence, it is classified as an AI Incident.

The Moment JKT48's Freya Grew Furious upon Learning Her Photos Were Manipulated with AI, Now Ending in a Police Report

2026-03-12
tvonenews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology to manipulate images of a person in a harmful and non-consensual manner, leading to reputational and emotional harm. The AI system's use directly caused the harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The police investigation and formal complaint further confirm the materialization of harm rather than a potential risk or complementary information.

JKT48's Freya's Photos Manipulated into Indecent Images, She Reports AI Misuse

2026-03-13
Pos Kupang
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of AI systems (Grok and Swap) to manipulate images in a harmful way, constituting a violation of personal rights and reputational harm. The harm has already occurred and is ongoing, as the manipulated content has been distributed since 2022. The involvement of AI in creating the manipulated content is explicit, and the harm is direct and significant. Hence, this qualifies as an AI Incident under the framework.