Deepfakes Target Female Leaders in Pakistan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pakistani politician Azma Bukhari was targeted by a sexualised deepfake video intended to discredit her as a female leader. This misuse of AI technology to create false and harmful content highlights the growing threat of deepfakes in Pakistan, where poor media literacy exacerbates the impact on women's reputations.[AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake generation is an AI capability; here it was used to create false, sexualized videos of a public figure, leading to reputational damage, emotional distress, and a court case. This is a realized harm directly stemming from AI misuse.[AI generated]

AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing; Fairness

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security; Education and training

Affected stakeholders
Government; General public; Other

Harm types
Reputational; Psychological; Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
The Straits Times
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI capability; here it was used to create false, sexualized videos of a public figure, leading to reputational damage, emotional distress, and a court case. This is a realized harm directly stemming from AI misuse.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
GEO TV
Why's our monitor labelling this an incident or hazard?
Deepfake generation and distribution are explicit uses of AI systems: real audio and visual content is manipulated to produce harmful, fabricated videos of women leaders. These videos have already caused emotional distress, reputational harm, and increased personal risk. The harm has materialized, so this is an AI Incident.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article describes actual misuse of AI deepfake systems—counterfeit sexualized videos and manipulated images—targeting female public figures in Pakistan. These deepfakes have been published and circulated, causing real reputational, psychological, and human rights harms. The incidents stem from malicious use of AI, meeting the criteria for an AI Incident.

"I Was Shattered": Deepfakes Target Women Leaders In Pakistan

2024-12-03
NDTV
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI technology misused to fabricate explicit content of women leaders, leading to defamation, mental health impacts, and potential honor-based violence. The harm is ongoing and directly tied to the AI system's outputs, constituting a clear AI Incident.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Prothomalo
Why's our monitor labelling this an incident or hazard?
The article describes the malicious use of AI-generated deepfakes to smear and damage the reputations of female public figures, a direct harm to individuals arising from the misuse of an AI system to produce and disseminate false and harmful content.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Dawn
Why's our monitor labelling this an incident or hazard?
Deepfake generation and distribution are AI-driven processes. Here, these AI-generated forgeries have been actively used to harm women's reputations and mental health, and to threaten their personal safety. Since the AI misuse has directly led to real and significant harm, this event qualifies as an AI Incident.

Pakistan: Deepfakes weaponised to target women leaders

2024-12-03
Khaleej Times
Why's our monitor labelling this an incident or hazard?
Deepfakes are explicitly generated via AI and weaponized against female politicians, directly causing reputational and psychological harm (rights violations) and potential physical danger in a conservative context. These are realized harms, not mere potential risks or background context, so this is an AI Incident.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-04
The News International
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI capability. The weaponised deepfakes directly harmed the targeted women’s reputations and mental health in a conservative context, violating their rights and exposing them to danger. This is a clear case where AI use has directly led to harm.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
The Economic Times
Why's our monitor labelling this an incident or hazard?
Deepfake technology (an AI system) has been used to create and distribute counterfeit sexualized videos of female politicians, directly causing reputational, psychological, and potential physical harm under honor-based social norms. These harms have materialized, so this is an AI Incident.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article describes real incidents where AI-powered deepfake techniques have been used to create and distribute false sexualized videos of female leaders, directly harming their reputation, psychological well-being, and physical safety. This is a clear case of AI misuse resulting in actual harm, fitting the definition of an AI Incident.

Deepfakes targeting Pakistan's women politicians

2024-12-04
The Gulf Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the creation and dissemination of deepfake videos, which are AI-manipulated media. Their use has directly led to harm: psychological distress to the targeted women, reputational damage, and potential threats to their safety, constituting harm to persons and communities. This fits the definition of an AI Incident because the AI system's use has directly caused harm. The article also mentions legal frameworks and responses, but its primary focus is the realized harm caused by the AI-generated deepfakes, not just responses or potential risks.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Daily Journal
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media, and their deployment here is explicitly described as targeting a person to discredit her, which is a direct harm to her reputation and dignity. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (harassment, reputational damage). The article describes realized harm, not just potential harm, so it is not a hazard or complementary information.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfakes, AI-generated manipulations of audio and video that create false likenesses. The use of these deepfakes to discredit and harass women leaders constitutes a direct harm to their personal and social well-being, fulfilling the criteria for an AI Incident. The harms include psychological injury, reputational damage, and threats to personal safety, which align with violations of human rights and harm to communities. The use of AI systems to generate these deepfakes is clear and central to the incident described.

Deepfakes weaponised to target Pakistan's women leaders

2024-12-03
Brattleboro Reformer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfakes, which are AI-generated manipulated videos, targeting women leaders in Pakistan with sexualized false content. This use of AI directly causes harm to individuals' reputations, mental health, and potentially their physical safety due to societal norms. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

Is the full video of Pakistani politician Azma (Uzma) Bukhari real or fake as it goes viral on Twitter/X

2024-12-04
The SportsGrail
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of deepfake technology used to create and spread a fake explicit video of a public figure. This use of AI has directly led to reputational harm and emotional distress to Azma Bukhari, which qualifies as harm to a person. The AI system's misuse is central to the incident, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the video has already gone viral and caused distress.