AI-Generated Deepfake Video Falsely Attributes Statements to Indian Home Minister

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely depicting Indian Home Minister Amit Shah criticizing Prime Minister Modi and NSA Ajit Doval over 'Operation Sindoor' circulated widely on social media. The video, flagged as fake by official fact-checkers, spread misinformation and risked undermining public trust in government officials.[AI generated]

Why's our monitor labelling this an incident or hazard?

The video is explicitly described as AI-generated deepfake content that falsely attributes statements to a public official, a direct use of AI to create misleading material. By spreading false narratives and undermining trust in public institutions, the misinformation constitutes harm to communities. Because the video is circulating and misleading the public, that harm is occurring, not merely potential. This event therefore meets the criteria for an AI Incident: an AI system generated harmful misinformation that is actively spreading.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Did Amit Shah Ask for PM Narendra Modi's Resignation Over Operation Sindoor and Criticised NSA Ajit Doval for Same? PIB Fact Check Debunks AI-Generated Propaganda Video | LatestLY

2025-09-23
LatestLY
Op Sindoor: Did Amit Shah say Modi should resign over Pakistan's attacks? No; clip is digitally manipulated - Alt News

2025-09-25
Alt News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create a deepfake video, that is, an AI system generating manipulated content. Although no direct harm has yet occurred, the potential for harm through misinformation and public deception is credible. Because no actual harm has materialized and the article focuses on debunking the manipulated clip, this constitutes an AI Hazard rather than an AI Incident: the AI system's role is the creation of a manipulated video that could plausibly cause harm if believed or spread further.
No, Amit Shah Did Not Demand PM Modi's Resignation Over Operation Sindoor | BOOM

2025-09-24
BOOMLive
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated deepfake video that falsely attributes statements to a public figure, as confirmed by AI deepfake detection tools. While the creation and sharing of deepfakes can lead to misinformation and harm, this article primarily reports on the detection and debunking of the deepfake, not on harm that has materialized or a credible imminent risk, so it does not meet the criteria for an AI Incident or an AI Hazard. Instead, it provides complementary information about AI misuse and detection methods, contributing to the understanding and mitigation of AI-related misinformation risks.