AI-Generated Obscene Images Used for Blackmail in Uttar Pradesh

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Bhadohi, Uttar Pradesh, a cyber cafe operator and his brother used AI to create obscene images of a woman from her social media photos, then blackmailed her for money. The accused extorted Rs 50,000 and threatened further exposure, with police investigating possible additional victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used to create obscene images of the woman, which were then used to blackmail her for money. This use of AI directly led to harm through extortion and emotional distress. The AI system's use here is malicious and has caused realized harm to the victim, meeting the criteria for an AI Incident under the definitions provided.[AI generated]
AI principles
Privacy & data governance, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Economic/Property, Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Pay or go viral: UP woman blackmailed using AI-generated obscene photos; Cyber cafe operator, brother booked

2026-05-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create obscene images of the woman, which were then used to blackmail her for money. This use of AI directly led to harm through extortion and emotional distress. The AI system's use here is malicious and has caused realized harm to the victim, meeting the criteria for an AI Incident under the definitions provided.
Cyber cafe operator, brother booked for extorting woman using AI-generated obscene photos

2026-05-16
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to morph photos into obscene images for blackmail purposes. This use of AI directly led to harm to the victim (emotional harm, violation of privacy, extortion) and potentially others, fulfilling the criteria for an AI Incident. The AI system's use in generating harmful content and facilitating extortion constitutes a violation of rights and significant harm to individuals and communities. Therefore, this is classified as an AI Incident.
Cafe operator booked for extorting from woman over 'obscene photos' created using AI

2026-05-16
ThePrint
Why's our monitor labelling this an incident or hazard?
The AI system was used maliciously to generate altered images that were then weaponized for extortion, causing direct harm to the victim's privacy, emotional well-being, and potentially her social and familial relationships. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The involvement of AI in creating the morphed images and the resulting extortion and blackmail clearly meets the criteria for an AI Incident rather than a hazard or complementary information.
Cafe operator booked for extorting from woman over 'obscene photos' created using AI

2026-05-16
NewsDrum
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to morph photos into obscene images for blackmail purposes. This constitutes a direct use of AI leading to harm, specifically violations of human rights and personal harm through extortion and harassment. The harm is realized, not just potential, as the victim has been extorted and threatened. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to significant harm to individuals.
Cafe operator booked for extorting from woman over 'obscene photos' created using AI

2026-05-16
DT Next
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that 'obscene photos' were created using AI and used for extortion and blackmail, which directly harms the victim. The AI system's involvement in generating these photos is central to the incident, and the harm is realized through the criminal act of blackmail. Therefore, this event qualifies as an AI Incident due to the direct link between AI-generated content and harm to individuals' rights and well-being.