Students Use AI to Create Obscene Image of Teacher


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Two class IX students in Moradabad, UP, have been booked for using AI tools to create and post a morphed obscene image of their female teacher on social media. The incident led to an FIR under the IT Act, and police are investigating while working to remove the image from the internet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a realized harm caused by the misuse of AI tools: two minors used AI to create and post a fake obscene image of their teacher, infringing her rights and causing reputational and psychological harm. This constitutes a direct AI Incident under the framework.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Human wellbeing; Accountability

Industries
Education and training; Media, social platforms, and marketing

Affected stakeholders
Women; Workers

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Case Against UP School Students For Posting AI Generated Obscene Image Of Teacher

2024-09-28
NDTV
Why's our monitor labelling this an incident or hazard?
The article describes a realized harm caused by the misuse of AI tools: two minors used AI to create and post a fake obscene image of their teacher, infringing her rights and causing reputational and psychological harm. This constitutes a direct AI Incident under the framework.

2 school students booked for posting AI generated obscene image of teacher in Uttar Pradesh

2024-09-28
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The misuse of AI image-generation tools directly led to a harm—violation of the teacher’s rights and reputational/psychological harm—prompting an FIR. This is a realized event where AI use caused wrongdoing, fitting the definition of an AI Incident.

UP: 2 Class 9th Students Booked For Posting AI-Generated Obscene Images Of Teacher On Social Media Platforms

2024-09-29
Free Press Journal
Why's our monitor labelling this an incident or hazard?
Minors used AI systems to generate and post fake obscene images of their teacher, directly leading to a violation of her fundamental rights (privacy, dignity) and inflicting harm. This is a realized AI-related harm, fitting the definition of an AI Incident.

Students Booked for AI-Generated Obscene Teacher Image | Law-Order

2024-09-28
Devdiscourse
Why's our monitor labelling this an incident or hazard?
Students used AI-based image‐morphing tools to create and post a fake obscene image of their teacher, resulting in police action and an FIR under the IT Act. This is a realized harm (reputational and psychological) directly caused by misuse of an AI system for illicit content creation, constituting an AI incident.

Latest News | UP: 2 School Students Booked for Posting AI Generated Obscene Image of Teacher | LatestLY

2024-09-28
LatestLY
Why's our monitor labelling this an incident or hazard?
The incident involves the malicious use of an AI system (online AI image tools) to produce defamatory and non-consensual intimate imagery, causing direct harm to the teacher’s rights and resulting in police action. This is a realized harm from AI misuse, qualifying it as an AI Incident.

UP: 2 school students booked for posting AI generated obscene image of teacher

2024-09-28
NewsDrum
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (online AI tools) to generate harmful content (morphed obscene image) that was posted online, causing harm to the teacher's rights and dignity. This constitutes a violation of rights and harm to an individual, directly linked to the use of AI, thus qualifying as an AI Incident.

UP: Class 9 Students Created an Obscene Photo of Their Teacher with AI, Then Shared It on Social Media

2024-09-29
LallanTop
Why's our monitor labelling this an incident or hazard?
The event involves the direct malicious use of an AI system to generate and publicly share a non-consensual obscene image of the teacher, causing reputational and psychological harm and prompting legal action. This is a realized harm attributable to an AI system, classifying it as an AI Incident.

Such a Vile Act Against a Teacher... 2 Students Used AI to Create an Obscene Image of a Female Teacher, and Then... - Lalluram

2024-09-28
Lalluram
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate harmful content (obscene images) that was then shared publicly, directly causing harm to the individual depicted (violation of rights and privacy). The AI system's use here is central to the incident, and the harm has already occurred, meeting the criteria for an AI Incident.

National News | Two School Students Booked for Circulating AI-Generated Obscene Photo of Teacher | LatestLY Hindi

2024-09-28
LatestLY Hindi
Why's our monitor labelling this an incident or hazard?
The article explicitly states that two students used AI to generate obscene images of a teacher and shared them on social media, leading to legal action. The AI system's use directly led to harm in the form of violation of the teacher's rights and reputational damage, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

Two Class 9 Students Created an Obscene Photo of a Female Teacher Using AI, Circulated It on Social Media

2024-09-28
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The AI system was used in the development and use phases to create and disseminate harmful content. The harm is a violation of the teacher's rights and reputational harm, which falls under violations of human rights or breach of applicable laws protecting fundamental rights. Since the harm has already occurred and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.