Johor Teen Arrested for AI-Edited Obscene Images


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A 16-year-old boy in Johor was arrested on April 8 for allegedly using AI technology to edit photos of schoolmates, obtained from social media, into lewd images, which he then distributed and sold. Authorities seized his mobile phone and are urging further victims to come forward as investigations continue.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to generate harmful, non-consensual sexualized images of individuals, which constitutes a violation of rights and harm to communities. The AI system's use directly led to realized harm through the creation and distribution of these images. This fits the definition of an AI Incident because the AI system's use caused direct harm to persons and communities, including minors, and legal action is underway.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological, Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


16-year-old expelled for selling lewd AI edited images

2025-04-11
thesun.my
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate harmful, non-consensual sexualized images of individuals, which constitutes a violation of rights and harm to communities. The AI system's use directly led to realized harm through the creation and distribution of these images. This fits the definition of an AI Incident because the AI system's use caused direct harm to persons and communities, including minors, and legal action is underway.

More reports lodged against Johor teen over AI-doctored lewd pics, says state police chief

2025-04-12
The Star
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create doctored images that are lewd and sold without consent, directly harming the victims' privacy and dignity. This constitutes a violation of human rights and harm to individuals, fitting the definition of an AI Incident. The police investigation and school expulsion further confirm the harm has materialized.

Johor teenager nabbed for allegedly creating, selling lewd AI pics of schoolmates

2025-04-09
The Star
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create lewd, deepfake images of minors and others, which were then distributed and sold, causing direct harm to the victims. This fits the definition of an AI Incident as the AI system's use has directly led to violations of rights and harm to individuals and communities. The involvement of law enforcement and ongoing investigation further supports the classification as an incident rather than a hazard or complementary information.

Johor teen held for editing and selling AI-generated obscene images

2025-04-09
thesun.my
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a teenager used AI technology to edit images to create obscene content featuring victims' faces, which were then sold online. This constitutes a violation of human rights and privacy, fulfilling the criteria for an AI Incident. The AI system's use directly led to harm to individuals, and multiple victims have been identified. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.

Johor Teen Held For Editing And Selling AI-generated Obscene Images - Police

2025-04-09
BERNAMA
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate and manipulate obscene images, which were then distributed and sold, directly causing harm to the victim's rights and dignity. This constitutes a violation of human rights and a breach of applicable laws protecting privacy and personal rights, fitting the definition of an AI Incident.

Cops nab Johor teen for selling deepfake porn of schoolmates

2025-04-09
Malay Mail
Why's our monitor labelling this an incident or hazard?
The suspect used AI technology to generate deepfake pornographic images, which were then sold and distributed, directly causing harm to the victims through violation of their rights and reputational damage. The involvement of AI in creating manipulated content without consent and its distribution constitutes a clear AI Incident under the framework, as it has directly led to violations of human rights and harm to individuals and communities.

16-year-old boy who made AI porn of schoolmates nabbed

2025-04-10
The Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create non-consensual pornographic images, which is a direct violation of individuals' rights and causes harm to the victims. The AI system's use here directly led to harm (violation of privacy, distribution of obscene materials) and is under criminal investigation. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities.

16-year-old student in Malaysia arrested for creating and selling lewd AI-generated deepfake of schoolmates

2025-04-10
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deepfake images, which are manipulated visual content generated by AI systems. The creation and sale of these explicit AI-generated images have directly led to psychological and emotional harm to the victims, including trauma, anxiety, and feelings of virtual assault. This constitutes a violation of human rights and privacy, fulfilling the criteria for an AI Incident. The AI system's use in this malicious manner is central to the harm caused, and the event describes realized harm rather than potential harm. Therefore, the classification as an AI Incident is appropriate.

More police reports lodged against Johor teen over AI-doctored lewd pics

2025-04-12
Today Headline
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create doctored lewd images (deepfakes) of victims, which were then sold, causing harm to the victims' privacy and dignity. The harm is realized and ongoing, with multiple police reports and investigations. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities. Therefore, this event is classified as an AI Incident.

Deepfake scandal widens

2025-04-12
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a teenager used AI to create and distribute explicit deepfake images of schoolmates, including minors as young as 12 or 13 years old. This misuse of AI has directly caused harm to the victims, including violations of their rights and safety, which fits the definition of an AI Incident under violations of human rights and harm to individuals. The involvement of AI in generating deepfake content that is sexually explicit and non-consensual is a clear case of AI misuse leading to realized harm. Therefore, this event qualifies as an AI Incident.

Foon Yew girls bravely step forward

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology to create non-consensual explicit images of female students, which is an AI system generating harmful content. The harm is realized as the victims have suffered violations of their rights and personal harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to persons (violation of rights and psychological harm).

[Students spreading synthetic indecent photos online for profit] Johor Education Department meets this morning; latest developments to be reported

2025-04-10
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The use of deepfake technology to create synthetic explicit images is an AI system application. The distribution of these images for profit causes harm to the victims' rights and dignity, constituting an AI Incident under violations of human rights or breach of obligations to protect fundamental rights. The event describes realized harm and ongoing investigation, fitting the AI Incident classification.

Assistant has accompanied 7 victims to report indecent photos; 张念群 says police have applied to extend the suspect's remand

2025-04-09
Malaysiakini.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, which is an AI system capable of generating synthetic images. The malicious use of this AI system to create and spread non-consensual explicit images has caused direct harm to the victims, including minors, constituting violations of personal rights and harm to individuals and communities. The involvement of law enforcement and ongoing investigation further supports the classification as an AI Incident rather than a hazard or complementary information.

Police arrest teenager in Johor for allegedly selling synthetic indecent photos

2025-04-09
Malaysiakini.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and sale of synthetic explicit images (likely deepfakes) using photos stolen from social media, which implies the use of AI or AI-enabled tools for image synthesis. The harm includes violation of privacy and potential psychological and reputational harm to the victims, which fits under harm to persons and communities. The police intervention and arrest confirm that the harm is realized, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm.

Don't take students' physical and mental safety lightly; Dong Zong urges a joint response to online sexual violence

2025-04-10
Malaysiakini.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-based image synthesis technology to create non-consensual explicit images of students, which is a form of online sexual violence causing harm to individuals' mental health and rights. This fits the definition of an AI Incident because the AI system's misuse has directly led to harm to persons (students). The call for joint prevention and response further confirms the recognition of harm caused by AI misuse.

女研社 voices support for Foon Yew victims, warns against AI becoming a tool of online sexual violence

2025-04-11
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to fabricate and spread sexually explicit content without consent, which is a direct violation of victims' rights and constitutes sexual violence. The harm is realized as victims have reported the abuse, and the AI system's misuse is central to the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malicious use.

[Expert commentary] Though the suspect who spread indecent photos for profit is a minor, lawyer says that does not mean he can escape legal punishment

2025-04-11
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article describes a case where a minor used deepfake AI technology to create and spread non-consensual explicit images of female students, causing harm to the victims. The AI system's use directly led to violations of rights and personal harm, triggering legal action. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals and violations of their rights. The legal and social responses further confirm the seriousness of the incident.

[Students spreading synthetic indecent photos online for profit] Form One girl reveals she was targeted with deepfake images in primary school; male student involved has given a statement

2025-04-11
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating synthetic images. The creation and dissemination of non-consensual explicit deepfake images constitute a violation of personal rights and cause harm to the victim. Since the event describes actual harm caused by the AI system's use (deepfake generation and distribution), it qualifies as an AI Incident under the definitions provided.

黄瑞泰: Paternalistic thinking ends up harming the school

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-enabled technology (image synthesis) to create harmful content (non-consensual explicit images) that directly harms individuals (students) and the school community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The article focuses on the incident and its consequences rather than on potential future harm or responses, so it is classified as an AI Incident rather than a hazard or complementary information.

[Students spreading synthetic indecent photos online for profit] 22 reports received so far; police investigating whether the suspect had accomplices

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to create synthetic explicit images without consent, which are then disseminated and sold, causing harm to the victims' privacy and dignity. This constitutes a violation of human rights and harm to communities. The involvement of AI in the creation of these images and the resulting harm meets the criteria for an AI Incident, as the AI system's use directly leads to harm.

[Students spreading synthetic indecent photos online for profit] 廖彩彤 urges the public not to forward or download the indecent photos, to spare victims further harm

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated synthetic explicit images (deepfakes) of students are being spread online, causing harm to the victims. The use of AI to create these images and their distribution has led to violations of personal rights and digital sexual violence, which are harms to individuals and communities. The involvement of AI in the creation of these images and the resulting harm meets the criteria for an AI Incident. The article also references legal frameworks addressing such harms, reinforcing the recognition of realized harm due to AI misuse.

[Students spreading synthetic indecent photos online for profit] 6 victims refuse to stay silent; for some, this is not their first experience of sexual harassment

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article describes an incident where male students used deepfake AI technology to create and spread non-consensual explicit images of female students for profit. This use of AI directly caused harm to the victims, including violations of their rights and psychological harm, fitting the definition of an AI Incident under violations of human rights and harm to communities. Therefore, this event is classified as an AI Incident.

[Student stole photos to make indecent images for profit] 38 victims, including alumni and students, reject the apology and issue 4 demands

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and distribution of AI-generated deepfake explicit images (deepfake non-consensual pornography) targeting students and alumni, which is a clear violation of human rights and privacy. The harm is realized as victims have been identified and are seeking justice. The AI system's use in generating these images is central to the harm caused. Therefore, this qualifies as an AI Incident due to direct harm to individuals and communities through AI-enabled malicious content creation and dissemination.

[Student stole photos to make indecent images for profit] 张念群 urges public not to implicate Kulai Foon Yew staff and students; calls on private schools, including independent Chinese schools, to draw up sexual-misconduct response SOPs

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake images to harm individuals, which is a direct violation of rights and causes harm to the victims. The AI system's use in creating synthetic indecent images and their distribution has directly led to harm. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by AI-generated content affecting individuals' rights and well-being.

[Student stole photos to make indecent images for profit] Key to suspect C's arrest: victims provided e-wallet TNG codes and screenshots

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology (an AI system) to create and spread non-consensual explicit images, which is a direct violation of human rights and causes harm to the victims. The involvement of AI in the creation of harmful content and the resulting harm to individuals and communities meets the criteria for an AI Incident. The event is not merely a potential risk but a realized harm, with police action and victim reports confirming the incident.

"You touch, you go!" 张念群: staff accused of campus sexual offences should be immediately transferred out of schools

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article involves AI only indirectly through the mention of deepfake technology used to create inappropriate images, which is a misuse of AI-generated content causing harm to individuals. However, the main harm discussed is sexual misconduct by school staff, which is a human rights violation. The AI system's role is secondary and not the direct cause of the incident but contributes to the harm through misuse. Since the article focuses on the social issue and institutional response rather than the AI system itself, it fits the definition of Complementary Information rather than an AI Incident or Hazard.

柯福特: A wake-up call for national sex education and online safety | Commentary

2025-04-13
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create harmful content (non-consensual explicit images) that has directly led to harm to individuals (privacy violations, psychological harm) and communities (social trust erosion). The AI system's use in this criminal activity is central to the harm caused, qualifying this as an AI Incident. The article discusses realized harm rather than potential harm, and the AI system's role is pivotal in the incident.

[Students spreading synthetic indecent photos online for profit] Johor police have received 10 reports so far; investigation report to be completed shortly

2025-04-11
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the spreading of synthetic explicit images, which are typically created using AI-based generative techniques (deepfake technology). The involvement of AI in generating these images is reasonably inferred. The harm is realized as victims have been reported and police are investigating the case. The harm includes violation of privacy and potential psychological harm to victims, which fits the definition of an AI Incident under violations of human rights and harm to persons. Therefore, this event is classified as an AI Incident.

[Students spreading synthetic indecent photos online for profit] Form One girl reveals a classmate made deepfake images of her in primary school; Johor Bahru Foon Yew accompanies her to lodge a police report

2025-04-10
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake image synthesis) to create non-consensual explicit images of a student, which were then disseminated, causing harm to the victim. The AI system's use directly led to a violation of rights and harm to the individual. The involvement of the school and police confirms the harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly caused harm to a person.

[Students spreading synthetic indecent photos online for profit] Victims encouraged to come forward; Dong Zong urges schools to respond firmly

2025-04-10
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article describes students using deepfake technology to create and spread synthetic explicit images of other students, which is an AI system's use causing direct harm to individuals' rights and psychological well-being. The harm is realized, as victims have been identified and police investigations and arrests have occurred. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities. The article also discusses responses and calls for protective measures, but the primary event is the realized harm caused by AI-generated synthetic images.

[Students spreading synthetic indecent photos online for profit] Student involved has been expelled; Foon Yew board of directors: fully cooperating with police investigation

2025-04-10
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The use of deepfake technology to create non-consensual explicit images is a clear example of AI system misuse causing direct harm to individuals, specifically violations of privacy and potentially other human rights. The event describes realized harm through harassment and distribution of synthetic explicit content, meeting the criteria for an AI Incident. The involvement of AI is explicit through the mention of deepfake technology, and the harm is direct and significant. Therefore, this event qualifies as an AI Incident.

[Students spreading synthetic indecent photos online for profit] Another Johor victim speaks out: the photos are still circulating online and the harm has not stopped

2025-04-10
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of synthetic explicit images (likely AI-generated or AI-assisted deepfakes) of students, which is a direct violation of personal rights and causes harm to the victims. The AI system's involvement is inferred from the mention of '合成不雅照' (synthetic explicit photos), which typically involves AI-based image synthesis technologies. The harm includes violation of privacy, psychological harm, and threats, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The ongoing circulation of these images and the lack of effective institutional response further confirm the incident's severity.

[Students spreading synthetic indecent photos online for profit] More victims come forward; perpetrators not limited to one group

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated explicit images are being distributed maliciously by perpetrators, causing harm to victims. The use of AI to create synthetic nude photos directly leads to violations of personal rights and harassment, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the realized harm to victims through distribution and harassment clearly meets the definition of an AI Incident under violations of human rights and harm to individuals.

Malaysia's 'Nth Room' case erupts; 张念群: such behaviour will absolutely not be tolerated【东方头条】2025-4-9

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology (an AI system) to create non-consensual explicit images, which are then distributed to harm and exploit victims. This constitutes a violation of human rights and personal privacy, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, with multiple victims affected, including minors. Therefore, this is classified as an AI Incident.

[Students spreading synthetic indecent photos online for profit] Victims urge public to stop sharing photos of the suspect, to prevent the case from escalating into cyberbullying

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves the use of AI-based deepfake technology to create non-consensual synthetic explicit images, which were distributed and monetized, causing direct harm to the victims. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The involvement of AI in generating the synthetic images and the resulting harm is clear and direct.

[Students spreading synthetic indecent photos online for profit] Johor police chief confirms arrest of 16-year-old suspect, remanded 4 more days to assist investigation

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI applications to synthesize faces onto nude photos, creating non-consensual explicit images that were then distributed and sold online. This use of AI directly caused harm to the victims' rights and dignity, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The arrest and ongoing investigation confirm that harm has occurred, not just a potential risk. Therefore, this event qualifies as an AI Incident.

[Students spreading synthetic indecent photos online for profit] Male student involved allegedly posted an apology, then deleted it

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate synthetic explicit images and videos that were shared and sold, causing serious harm to the victims' privacy and reputation. This fits the definition of an AI Incident because the AI system's use directly led to violations of fundamental rights and harm to individuals. The student's admission and the described consequences confirm the harm has occurred, not just a potential risk. Therefore, this is classified as an AI Incident.

[Students spreading synthetic indecent photos online for profit] 张念群 estimates more than 40 victims, the youngest just 14

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating synthetic images. The malicious creation and distribution of these deepfake non-consensual explicit images have directly caused harm to the victims, including privacy violations and psychological trauma. The involvement of AI in the creation of harmful content and the resulting direct harm to individuals qualifies this event as an AI Incident under the framework, specifically under violations of human rights and harm to individuals.

[Students spreading synthetic indecent photos online for profit] Foon Yew girl says a group of more than 200 people traded lewd remarks about her; reporting it to a teacher only brought further harm

2025-04-09
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of photo synthesis technology to create non-consensual explicit images, which is a known AI application (deepfake). The images were distributed for profit, causing direct harm to the victims' rights and dignity. The involvement of AI in generating these images and the resulting harm to individuals meets the criteria for an AI Incident under violations of human rights and harm to communities. The event also includes law enforcement action, confirming the harm has materialized.

Kulai secondary school student arrested for deepfaking and selling indecent photos of female schoolmates

2025-04-09
早报
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI deepfake technology to create non-consensual explicit images, which were then distributed and sold online, causing direct harm to the victims. The AI system's use here is malicious and leads to violations of personal rights and privacy, which are harms under the AI Incident definition. The harm is realized, not just potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this event qualifies as an AI Incident.

Photos stolen and turned into synthetic indecent images; 张念群 urges victims to bravely lodge reports and seek help | The Malaysian Insight

2025-04-09
themalaysianinsight.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create and spread non-consensual explicit images, which constitutes a violation of privacy and personal rights, causing psychological and reputational harm to victims. This harm is realized and ongoing, meeting the criteria for an AI Incident. The involvement of AI in the malicious use of deepfake technology directly leads to harm to individuals and communities, fulfilling the definition of an AI Incident under violations of human rights and harm to communities.

Malaysian teenager suspected of doctoring and selling indecent photos; victims may number in the dozens

2025-04-09
China News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate manipulated explicit images (deepfakes) of real individuals without consent, which has led to direct harm to the victims' privacy, reputation, and potentially psychological health. The AI system's use in creating and distributing these images is central to the harm caused. The harm is realized, with multiple victims reporting to the police, including minors, indicating serious violations of rights and harm to communities. Therefore, this qualifies as an AI Incident.

Student AI nude-photo case; 张念群: schools should handle complaints seriously according to SOP

2025-04-12
cincainews.com
Why's our monitor labelling this an incident or hazard?
The event describes a case where AI-generated deepfake technology was used maliciously to create explicit content without consent, directly harming the victims (violation of rights and harm to individuals). The involvement of AI in the creation of these images and videos is explicit, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident. The article also discusses the inadequate response by schools, emphasizing the need for proper handling of such AI-related harms.

[Students spreading synthetic indecent photos online for profit] Deepfake indecent photos in circulation; victims' representative: gathered victims to report to the police immediately

2025-04-12
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-based deepfake technology to synthesize explicit images of students without their consent and distribute them, causing direct harm to the victims. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities. The victims' response, including reporting to authorities, further confirms the realized harm. Therefore, this event is classified as an AI Incident.