AI-Generated Fake Videos Lead to Death Threats for Pakistani TikToker


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pakistani TikToker Imsha Rehman received death threats and suffered severe emotional distress after AI-generated fake explicit videos of her went viral. The scandal forced her offline and kept her from attending university. Rehman criticized those who create and spread such harmful content without considering the consequences.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the creation and viral spread of doctored explicit videos, implying the use of AI or advanced digital manipulation technology. The fabricated content caused direct harm to the victim, including death threats and severe emotional distress, and led to violations of her rights. This fulfils the criteria for an AI Incident.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Accountability, Robustness & digital security, Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


After private video scandal, Pakistani TikToker Imsha Rehman gets death threats. 'My Life Is Over'

2025-02-03
The Economic Times

All about the Pakistani influencer Imsha Rehman whose explicit videos went viral on social media - The Times of India

2025-02-03
The Times of India
Why's our monitor labelling this an incident or hazard?
The article describes fake explicit videos that were leaked online, allegedly created by hackers. Given that the videos are fake, it is reasonable to infer the involvement of deepfake or similar AI technology in creating the manipulated content. The harm was realized: the individual faced severe backlash and threats, which are direct harms to the person. This qualifies as an AI Incident because of the direct harm caused by the AI-generated fake content.

Pakistani TikToker Imsha Rehman breaks silence after explicit videos scandal: 'I'm getting death threats'

2025-02-03
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or similar technology to create fake explicit videos (deepfakes or doctored videos) of a person, which circulated online and caused serious personal harm, including threats to her safety and social ostracization. The involvement of an AI system is reasonably inferred from the doctored nature of the videos. The harm is direct and significant, fitting the definition of an AI Incident under violations of rights and harm to individuals and communities.