Police Investigate Deepfake Nude Photos at Singapore Sports School


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Police are investigating the creation and distribution of deepfake nude photos of Singapore Sports School students, allegedly made by fellow students. The school has launched an investigation, lodged a police report, and is offering counseling to affected students. Parents and female teachers were also targeted, prompting multiple police reports.[AI generated]

Why's our monitor labelling this an incident or hazard?

Students used AI-based deepfake technology to generate and circulate nude images of their peers. This misuse of AI directly led to harm (privacy violation, emotional distress, potential child exploitation), and police are investigating the incident. It therefore qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Safety
Human wellbeing
Accountability
Robustness & digital security

Industries
Education and training
Media, social platforms, and marketing
Digital security

Affected stakeholders
Children
Workers
Women
General public

Harm types
Psychological
Reputational
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Singapore police probing deepfake nude images of sports school students shared within campus

2024-11-12
Malay Mail
Why's our monitor labelling this an incident or hazard?
Students used AI-based deepfake technology to generate and circulate nude images of their peers. This misuse of AI directly led to harm (privacy violation, emotional distress, potential child exploitation), and police are investigating the incident. It therefore qualifies as an AI Incident.

Police investigating deepfake nude photos of Singapore Sports School students

2024-11-12
The Straits Times
Why's our monitor labelling this an incident or hazard?
Deepfake generation is a misuse of AI technology that directly led to violations of students’ and teachers’ privacy and dignity, causing realized harm. This meets the definition of an AI Incident because the AI system’s use directly produced non-consensual harmful content.

Cops probe deepfake nude photos of sports school kids

2024-11-12
The Star
Why's our monitor labelling this an incident or hazard?
Students used AI-enabled deepfake tools to generate and distribute explicit, non-consensual images of classmates and teachers, constituting a realized harm—violation of privacy, sexual exploitation, and psychological damage—directly attributable to AI misuse.

AsiaOne

2024-11-12
AsiaOne
Why's our monitor labelling this an incident or hazard?
The incident involves the use of a deepfake AI system to produce and distribute non-consensual explicit images, directly leading to violations of personal rights and emotional distress for victims. It is an actual event where the AI system’s outputs have caused harm, fitting the definition of an AI Incident.

Police investigate deepfake nude photos of Singapore Sports School students created by peers

2024-11-12
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake images, which are generated using AI systems capable of manipulating images to create realistic but fake content. The creation and sharing of these images have directly led to harm to the students' privacy, safety, and well-being, fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the realized harm to individuals justifies this classification.

Police investigating deepfake nude photos of Singapore Sports School students

2024-11-12
CNA
Why's our monitor labelling this an incident or hazard?
Deepfake generation involves an AI system that has directly harmed the privacy and rights of the students by creating and sharing non-consensual sexual images. This is a realized incident causing psychological and reputational harm, so it qualifies as an AI Incident.