IIIT Raipur Student Uses AI to Create Fake Explicit Images of 36 Female Students

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A third-year engineering student at IIIT Raipur, India, used AI tools to generate fake explicit images and videos of 36 female classmates by morphing their social media photos. The incident led to the student's suspension, police involvement, and an internal investigation, causing significant distress among the victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (AI-based image creation and editing apps) to produce harmful, obscene fake images of students, which directly violates the rights and dignity of the affected individuals. This constitutes a breach of fundamental rights and personal privacy, fulfilling the criteria for an AI Incident. The harm has already occurred, with police action and institutional suspension following the misuse of AI.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Safety
Accountability

Industries
Education and training

Affected stakeholders
Women

Harm types
Psychological
Reputational
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Raipur IIIT College News: Syed Rahim Adnan, the student who made obscene AI images of female students at Raipur IIIT, arrested; already suspended by the institute

2025-10-10
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image creation and editing apps) to produce harmful, obscene fake images of students, which directly violates the rights and dignity of the affected individuals. This constitutes a breach of fundamental rights and personal privacy, fulfilling the criteria for an AI Incident. The harm has already occurred, with police action and institutional suspension following the misuse of AI.

IIIT Raipur AI Cybercrime: AI-generated pornographic images of 36 female students of CG IIIT were created, and Syed Rahim Ali, a student at the institute, was arrested; the Chief Minister took cognizance of the complaint

2025-10-09
npg.news
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate non-consensual pornographic images, which constitutes a violation of human rights and causes harm to the individuals depicted. The AI system's use in this malicious manner directly led to harm, fulfilling the criteria for an AI Incident. The arrest and police action further confirm the realization of harm rather than a potential risk.

IIIT student arrested for morphing female students' photos with the help of AI

2025-10-09
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to alter images in a harmful way, resulting in social and psychological harm to the victims. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons and violation of rights. Although the images have not been confirmed as shared online, the creation and possession of manipulated obscene images already constitute harm. Therefore, this is classified as an AI Incident.

IIT student was creating obscene images of female students with AI; arrested by police

2025-10-10
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The use of AI tools to create fake and obscene images directly leads to harm by violating the privacy and dignity of the individuals involved, which falls under violations of human rights and causes harm to communities. Since the AI system's use has directly led to this harm and legal action is underway, this qualifies as an AI Incident.

Engineering student used an AI tool to create obscene photos of 36 female students; now suspended

2025-10-08
OpIndia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI tool) to generate fake explicit images of individuals without their consent, which constitutes a violation of human rights and causes harm to the affected students. The harm has already occurred as the fake photos were created and distributed, leading to institutional and police action. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm to individuals' rights and dignity.

IT student in Chhattisgarh created obscene images of 36 female students with AI; over 1,000 videos also recovered

2025-10-08
hindi.moneycontrol.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create manipulated obscene images and videos of students without their consent, which constitutes a violation of human rights and privacy. The harm is direct and materialized, as the students' images were used to produce harmful content, and the incident has led to police investigation and institutional disciplinary action. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to individuals.

Misuse of technology: Student made obscene images of 36 girls with AI; people were stunned to hear the punishment

2025-10-08
India.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create explicit morphed images and videos without consent, which is a clear violation of human rights and privacy. The harm is direct and realized, affecting the dignity and rights of the victims. The AI system's use in generating such content is central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and breach of applicable laws protecting fundamental rights. Therefore, this event qualifies as an AI Incident.

Raipur IIIT student's vile act using AI: created obscene images of 36 female students

2025-10-08
Hindustan
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate non-consensual explicit content, which constitutes a violation of human rights and privacy. The harm is realized as the creation and possession of such images directly harms the individuals involved. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating harmful content.

IT institute student used AI to create obscene photos of 36 girls; here is what followed

2025-10-08
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event describes a student using AI tools to create explicit fake photos of multiple individuals without their consent, which is a direct violation of their rights and causes harm to the individuals involved. The AI system's use here is central to the harm caused, fulfilling the criteria for an AI Incident as it involves realized harm to persons and communities through misuse of AI-generated content.

IIIT Raipur student's despicable misuse of AI: created fake obscene images of 36 female students; expelled from college

2025-10-08
Hari Bhoomi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to generate fake explicit images of individuals without their consent, which constitutes a violation of human rights and causes harm to the affected individuals. The AI system's misuse directly led to these harms, fulfilling the criteria for an AI Incident. The article describes realized harm (mental distress, privacy violation) caused by the AI misuse, not just potential harm.

IIIT-Raipur student's shameful act: made obscene photos of female students with AI; institute suspends him

2025-10-08
Lalluram
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create non-consensual explicit fake content of individuals, which is a clear violation of human rights and causes harm to the affected students. The AI system's misuse directly led to this harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The institution's response and investigation do not negate the fact that harm has occurred due to AI misuse.

Raipur IIIT News: Case of obscene images of young women at Raipur Triple IT; police reach director O.P. Vyas's office; accused student suspended

2025-10-08
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate non-consensual explicit content of individuals, which is a clear violation of human rights and privacy (a breach of obligations under applicable law). The harm has already occurred as the images/videos were created and there is concern about potential sharing or distribution. The AI system's use directly led to this harm, qualifying this as an AI Incident under the framework.

Raipur Crime News: Obscene tampering with photos of 36 female students at the Raipur Triple IT institute; CM Sai promises strict action

2025-10-08
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to alter photos of 36 female students into obscene images, which is a clear violation of their rights and causes harm to their personal dignity and privacy. The AI system's use here is malicious and has directly led to harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals (violation of rights and harm to communities). The involvement of institutional and police responses further confirms the seriousness of the incident.

Obscene AI morphing of female students' photos at IIIT Raipur; accused student absconding; CM takes cognizance

2025-10-08
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI tools to manipulate images in a harmful way, leading to violations of privacy and cyber exploitation, which are breaches of fundamental rights. The AI system's use directly caused harm to individuals, fulfilling the criteria for an AI Incident. The involvement of AI in generating the obscene content and the resulting harm to the victims' rights and dignity clearly classifies this as an AI Incident rather than a hazard or complementary information.

Raipur: Obscene images of female students made with AI; uproar at IIIT; student suspended

2025-10-09
hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create obscene images and videos of students without their consent, which is a clear violation of their rights and causes harm to the individuals involved. The AI system's misuse directly led to this harm, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content and the resulting disciplinary and institutional response confirm this classification.

IIIT student who misused AI arrested; had made obscene photos of 36 female students

2025-10-09
Dainik Jagran
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate obscene images, which directly led to harm to the privacy and dignity of the students involved. This is a clear violation of human rights and applicable laws protecting privacy and against obscene content distribution. The AI system's misuse by the accused student is central to the incident, fulfilling the criteria for an AI Incident as the harm has already occurred and legal action is being taken.

Student at Raipur Triple IT made obscene photos of 36 female students with AI! The dark truth of a horrifying incident

2025-10-09
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to create fake obscene photos of students, which is a direct violation of their privacy and dignity, falling under violations of human rights. The AI system's use here is malicious and has directly led to harm to the students. The involvement of AI in generating harmful content and the resulting harm to individuals' rights and well-being clearly meets the criteria for an AI Incident. The management's failure to promptly report to police and concerns about potential viral spread further underscore the seriousness of the harm.

IIIT student in Chhattisgarh created obscene images of 36 female students with the help of AI; arrested

2025-10-10
India TV Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create fake obscene images, which is a misuse of AI technology causing harm to individuals' rights and dignity. This constitutes a violation of human rights and applicable laws protecting those rights. Since the harm has already occurred and legal action is underway, this qualifies as an AI Incident under the framework.

UNIVARTA

2025-10-09
http://www.univarta.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-generated images/videos) to create harmful content (obscene images/videos) targeting individuals, which constitutes a violation of rights and causes harm to the affected individuals. The AI system's use directly led to harm and legal action, qualifying this as an AI Incident.

Obscene photos of female students made via AI: police register FIR against IIIT-Raipur student

2025-10-09
Lalluram
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate obscene images and videos of students, which is a direct misuse of AI technology causing harm to individuals' rights and dignity. The involvement of AI in creating non-consensual explicit content is a clear violation of human rights and privacy laws, fulfilling the criteria for an AI Incident. The police FIR and institutional actions confirm that harm has occurred and is being addressed legally.

IIT AI Obscene Photo Case: Made obscene photos of female students with AI; police nab accused student in Bilaspur

2025-10-10
Lalluram
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create obscene photos of students, which is a direct violation of their rights and privacy, constituting harm to individuals. The involvement of AI in generating harmful content that led to police action and legal charges fits the definition of an AI Incident, as the AI system's use directly led to harm (violation of rights and potential psychological harm). The event is not merely a potential risk but a realized harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Made obscene photos of 36 girls with AI: IIIT Raipur student's 'dirty' act; CM takes cognizance

2025-10-09
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The use of AI tools to create non-consensual obscene images of individuals is a direct misuse of AI technology causing harm to the victims' rights and dignity, fitting the definition of an AI Incident. The AI system's use here directly led to violations of human rights and harm to individuals, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The involvement of AI in generating harmful content and the resulting cyber exploitation clearly qualifies this as an AI Incident.

Misuse of AI: Student made obscene images of 35 female students; uproar at college!

2025-10-09
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (an AI tool) to generate harmful content (obscene images) targeting individuals, leading to harm to the affected students and disruption within the community. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident due to the direct harm caused by the AI system's misuse.

FIR against student who made obscene images of 36 female students with AI; case registered under the IT Act and other sections

2025-10-09
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to manipulate images of students into obscene content, which is a direct misuse of AI technology causing harm to individuals' privacy and dignity. This constitutes a violation of rights under applicable laws and is a clear AI Incident as the AI system's use directly led to harm.

IIT student made obscene images of several girls using AI; arrested by police

2025-10-10
Dainik Jagran
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate obscene images of female students without their consent. This misuse directly harms the individuals involved by violating their privacy and dignity, which falls under violations of human rights and fundamental rights. The police arrest and institutional suspension indicate the harm has been realized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Crime News: IIIT student made obscene reels with an AI tool; 32 female students victimised; the full story

2025-10-10
Patrika News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to generate harmful content (obscene images and reels) without consent, directly violating the privacy and rights of the victims, which constitutes a breach of fundamental and possibly labor rights. This harm has already occurred, making it an AI Incident under the framework.

Raipur IIIT student made obscene images of female students with AI; police stunned by the contents of his laptop and mobile

2025-10-10
LatestLY Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create obscene images, which is a misuse of AI technology causing harm to individuals' rights and dignity. This falls under violations of human rights and breach of legal protections. Since the harm has already occurred and legal action is underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Chhattisgarh: IIIT Raipur student held for morphing photos of fellow girls using AI tools | Raipur News - The Times of India

2025-10-09
The Times of India
Why's our monitor labelling this an incident or hazard?
The student used AI image generation and editing tools to create morphed obscene photos of at least 36 girls, which is a direct misuse of AI technology causing harm to individuals' privacy and dignity, violating their rights. The involvement of AI tools in the creation of harmful content and the resulting police action and suspension confirm the direct link to harm. Hence, this is an AI Incident.

IT Student Uses AI To Create Porn Pics Of 36 Women Students, Suspended

2025-10-08
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create morphed pornographic images of women students, which is a clear violation of their rights and privacy, fulfilling the criteria for harm under violations of human rights and breach of obligations to protect fundamental rights. The use of AI in this malicious manner directly caused harm to the victims. Therefore, this qualifies as an AI Incident.

IT Student Uses AI To Make Obscene Images Of Over 30 Women Classmates; Faces Action

2025-10-08
News18
Why's our monitor labelling this an incident or hazard?
The use of AI to create non-consensual obscene images of individuals is a clear violation of human rights and privacy, fitting the definition of an AI Incident under category (c) violations of human rights or breach of obligations intended to protect fundamental rights. The AI system's role in generating the harmful content is direct and pivotal to the harm experienced by the victims. Therefore, this event qualifies as an AI Incident.

Chhattisgarh IT Student Uses AI To Morph Porn Pics Of 36 Classmates, Suspended

2025-10-08
Zee News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create obscene morphed images of 36 women students, which is a clear violation of their rights and privacy, thus constituting harm under the definition of an AI Incident. The use of AI in this malicious manner directly led to harm to the individuals involved. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.

IIIT Naya Raipur Orders Probe After Student Accused Of Creating Obscene AI-Images Of Females

2025-10-08
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event describes a student using AI to morph images of female students into obscene photos and videos, which is a direct violation of their rights and privacy. The use of AI to create such harmful content constitutes an AI Incident as it has directly led to harm to individuals (violation of rights and harm to communities). The incident is not merely a potential risk but an actual occurrence with tangible harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Chhattisgarh: IIIT Naya Raipur Student Suspended for Making AI-Generated Obscene Images of Female Classmates

2025-10-08
News24
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create morphed obscene images and videos of female students, which is a direct violation of their rights and causes harm to them. The AI system's use in this context has directly led to harm (violation of rights and harm to individuals), fulfilling the criteria for an AI Incident. The involvement of AI in generating the harmful content and the resulting disciplinary and legal actions confirm this classification.

Student Used AI To Create Pornographic Images Of Classmates, Arrested

2025-10-09
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of advanced AI software to morph photographs into obscene content, which directly harms the privacy and dignity of the victims, constituting a violation of rights under applicable law. The harm has already occurred as the images were created and stored, and legal proceedings have been initiated. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons and violation of rights.

Chhattisgarh engineering college student 'uses AI' to make women's obscene pics, suspended

2025-10-09
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create digitally morphed obscene images of women without their consent, which is a violation of their rights and causes harm to the individuals. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (violation of rights and personal harm). The involvement of AI is explicit, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

IIIT Raipur student morphs classmates' photos with AI, expelled from college

2025-10-09
India Today
Why's our monitor labelling this an incident or hazard?
The use of AI to morph photos into obscene visuals is a direct misuse of AI-generated content causing harm to individuals' rights and dignity, which falls under violations of human rights and personal rights. The AI system's use directly led to harm, making this an AI Incident. The article describes realized harm and institutional response, not just potential harm or general AI news.

MP: Engineering Student Found Generating Obscene Pics Of Females Using AI, Suspended

2025-10-09
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate non-consensual obscene images and videos of individuals, which is a direct violation of their rights and privacy, thus constituting harm to persons. The AI system's use in creating such content has directly led to harm, fulfilling the criteria for an AI Incident. The investigation and seizure of devices confirm the involvement of AI-generated content causing harm.

IIIT Raipur student held for creating AI-generated obscene images of female batchmates

2025-10-09
The Statesman
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves the use of AI-based image generation and editing tools to create harmful and obscene content targeting individuals, leading to mental and social harm. This fits the definition of an AI Incident as the AI system's use directly caused harm to persons and violated their rights. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.

IIIT Naya Raipur student arrested for creating objectionable AI-generated images of 36 female classmates

2025-10-10
The Telegraph
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate and edit images in a way that led to harm—specifically, the creation of obscene images of individuals without consent, which is a violation of their rights and personal dignity. The incident has resulted in legal action and institutional response, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm to persons through violation of rights and potential psychological harm.

IIIT Raipur student arrested for creating 1,000+ obscene AI-morphed images of female classmates

2025-10-10
India Today
Why's our monitor labelling this an incident or hazard?
The creation of AI-morphed obscene images directly harms the privacy and dignity of the individuals involved, constituting a violation of human rights and privacy laws. The use of AI to generate such content is a clear example of AI misuse causing realized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malicious use.

Chhattisgarh: IIIT student arrested for creating obscene images of female students using AI tools

2025-10-10
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image generation and editing tools) to create harmful content (obscene fake images) targeting individuals, which is a violation of rights and causes harm to the affected persons. The harm has materialized as the police have arrested the accused following a complaint, indicating direct harm caused by the AI system's misuse.

Chhattisgarh: IIIT student held for using AI tools to create obscene images of female students

2025-10-09
NewsDrum
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to create harmful, obscene images of individuals without consent, which constitutes a violation of rights and causes psychological and social harm. The AI system's use is central to the incident, as it enabled the creation of these images. This meets the criteria for an AI Incident due to realized harm linked to AI use.

IIIT Student Held In Raipur For Using AI Tools To Create Obscene Images Of 36 Female Students

2025-10-10
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI image generation and editing tools) to create harmful content (obscene images) targeting individuals, which is a violation of their rights and causes harm to the affected community. The AI system's use directly led to harm through the creation and distribution of objectionable content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

2nd-yr IIIT student used AI to create obscene images of 36 women students; arrested: Cop

2025-10-09
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating obscene morphed images, which directly caused psychological and social harm to the victims, constituting harm to individuals and communities. The AI system's use in creating non-consensual explicit content is a violation of rights and has led to an AI Incident as per the definitions provided. Although the images were not circulated, the harm from their creation and possession is realized.

Chhattisgarh: IIIT Student Creates Obscene Images Of Female Students Using AI Tools, Arrested

2025-10-10
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create fake and objectionable images, which directly harms the privacy and dignity of the female students involved. This is a clear case of an AI system's use leading to violations of rights and harm to individuals, fitting the definition of an AI Incident.

Chhattisgarh Police arrest IIIT student for creating obscene images of 36 female students

2025-10-09
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The accused used AI to generate pornographic images without consent, directly violating the rights of the female students and causing harm to them. The AI system's use in creating these images is central to the incident, and the harm is realized. Therefore, this qualifies as an AI Incident under violations of human rights and harm to individuals.

Chhattisgarh IT student arrested for using AI tools to create obscene images of female students

2025-10-09
The Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to create harmful, obscene images without consent, which constitutes a violation of rights and causes psychological and social harm to the affected individuals and their community. The AI system's use is central to the harm caused, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

AI Misconduct Shakes IIIT Chhattisgarh: Student Arrested | Technology

2025-10-09
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create harmful content (obscene morphed images), which caused emotional and social harm to the victims. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The AI system's use directly led to the harm, even though the images were not shared, the creation itself caused significant harm. Therefore, this is classified as an AI Incident.

AI Misuse Sparks Controversy at Raipur Institute | Headlines

2025-10-10
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI-powered tools to create objectionable images, a direct misuse of AI systems that violates individuals' rights and harms the affected community. The student's actions led to suspension and legal consequences, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct link between AI misuse and harm.

AI Misuse: IIIT Naya Raipur Student Arrested for Creating Fake Obscene Images of Female Students Using AI Tools | 📰 LatestLY

2025-10-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image generation and editing tools) to create harmful content (fake obscene images) that directly harms individuals' rights and dignity. The misuse of AI in this way has led to realized harm and legal consequences, fitting the definition of an AI Incident due to violation of rights and harm to persons.

India News | Chhattisgarh: IIIT Student Arrested for Creating Obscene Images of Female Students Using AI Tools | LatestLY

2025-10-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to create harmful and objectionable content targeting individuals, which constitutes a violation of rights and causes harm to the affected community. The AI system's use directly led to harm through the creation and dissemination of obscene images, resulting in police intervention and legal consequences. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse.

AI Misuse: IIIT Naya Raipur Student Arrested for Creating Fake Obscene Images of Female Students Using AI Tools

2025-10-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image generation and editing tools) to create harmful content (fake obscene images) targeting individuals, which directly leads to harm in terms of violation of rights and personal dignity. The harm has materialized as evidenced by the arrest and police investigation. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of applicable law protecting fundamental rights.

AI Misuse: IIIT Naya Raipur Student Arrested for Creating Obscene Images of Female Students Using AI Tools | LatestLY

2025-10-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-based image generation and editing tools) to create harmful and objectionable content targeting individuals, which constitutes a violation of human rights and causes harm to the affected persons. The police arrest and legal proceedings confirm that harm has occurred due to the AI system's misuse. Therefore, this qualifies as an AI Incident.

IIIT Naya Raipur student Syed Rahim Adnan Ali arrested for creating morphed obscene images of female classmates

2025-10-10
OpIndia
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves an AI system used to create morphed obscene images (deepfakes) of female classmates, which directly led to harm in the form of violations of privacy, dignity, and potentially other human rights. The use of AI for generating non-consensual explicit content is a clear case of harm caused by AI misuse. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of human rights and harm to individuals.