
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Delhi Police arrested Siddhnath Kumar from Bihar for creating and sharing AI-generated objectionable images of Prime Minister Narendra Modi and female leaders on social media. The images, intended to mislead and disrupt public order, led to charges of forgery, defamation, and criminal intimidation. Investigations into the dissemination network are ongoing.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create images that were disseminated with the intent to mislead and disturb public order, a form of harm to communities. The AI system's use led directly to police action and criminal charges, indicating realized rather than merely potential harm. The event therefore qualifies as an AI Incident, not a hazard or complementary information.[AI generated]