
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
In Bengaluru, AI-generated nude photos of Class 9 students were circulated on Instagram, leading to a police investigation. The images, created using photos from the victims' private Instagram accounts, were shared in a school-related group. Parents suspect an acquaintance is responsible and have filed a complaint with the Cyber Crime Cell.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated nude images created by extracting photos from social media and manipulating them to produce harmful content. This use of AI directly harmed the victims, including through violation of privacy and emotional distress, which falls under violations of human rights and harm to communities. The event therefore qualifies as an AI incident due to the realized harm caused by the malicious use of AI systems.[AI generated]