Pakistani Journalist Benazir Shah Targeted by AI-Generated Deepfake Video

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pakistani journalist Benazir Shah was targeted by an AI-generated deepfake video circulated on social media, leading to harassment and defamation. The incident drew condemnation from Information Minister Attaullah Tarar, who promised action against the perpetrators. Shah declined to pursue legal action, citing concerns over misuse of cybercrime laws.[AI generated]

Why's our monitor labelling this an incident or hazard?

The deepfake video is AI-generated manipulated content targeting a journalist, a clear example of harm caused by the misuse of an AI system. The harm is realized: the video has been circulated and used to harass and defame the journalist, harming both the individual and the wider community. The involvement of the Information Minister and the discussion of legal frameworks further underscore the incident's significance. Although no legal case was pursued, harm from the AI system's output has occurred. Hence, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Respect of human rights; Transparency & explainability; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Reputational; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Benazir Shah targeted in deepfake, Information Minister vows action

2025-11-17
The Express Tribune
Tarar takes notice of fake video targeting journalist Benazir Shah

2025-11-17
GEO TV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI-generated fake video was created and spread targeting journalist Benazir Shah, a direct use of AI to cause harm through defamation and intimidation. This meets the criteria for an AI Incident: it involves realized harm to a person and, potentially, to the community through misinformation. The AI system's role in generating the fake video is clear, and the harm is actual rather than merely potential. Although the journalist declined to pursue legal action, the incident itself occurred and was acknowledged by the federal minister, confirming the harm caused by the AI system's misuse.
Journalist Benazir Shah targeted in deepfake; Minister Tarar condemns attack

2025-11-17
Pakistan Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake video, an AI-generated manipulated video targeting a journalist. The deepfake's circulation constitutes a direct use of an AI system leading to harm through defamation and intimidation. This meets the criteria for an AI Incident: the AI system's use has directly harmed a person (the journalist) and potentially the broader community by threatening free expression. The minister's condemnation and the journalist's response provide context but do not negate the realized harm caused by the AI-generated deepfake.
Pakistani Journalist Benazir Shah's deepfake Dance Video goes Viral, minister reacts

2025-11-18
Pakistan Observer
Why's our monitor labelling this an incident or hazard?
The deepfake video is AI-generated manipulated content that has been used to harass and defame a journalist, constituting harm to the individual and potentially violating her rights. This fits the definition of an AI Incident because the AI system's use (deepfake generation) has directly led to harm (reputational damage and harassment). The article describes realized rather than merely potential harm, so it is not an AI Hazard. Nor is it merely complementary information or unrelated news, as the AI system's role in causing harm is central to the event.