AI-Generated Deepfake Video Targets Indian Army Chief, Spreads Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple AI-generated deepfake videos falsely depicting Indian Army Chief General Upendra Dwivedi making controversial statements about military technology and Operation Sindoor have circulated on social media. These manipulated clips, spread by propaganda accounts, have been debunked by Indian authorities, highlighting the harm caused by AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system generating a deepfake video, a form of AI-generated content manipulation. The deepfake falsely attributes statements to a high-ranking military official, spreading misinformation that can erode public trust and harm national security. This meets the criteria for an AI Incident because the AI system's use has directly harmed communities through misinformation and the potential undermining of national security. The article confirms the video is AI-generated and warns of its malicious intent, indicating realized harm rather than merely potential risk.[AI generated]
AI principles
Accountability, Safety, Robustness & digital security, Transparency & explainability, Respect of human rights, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Deepfake video falsely attributed to Army Chief Gen Dwivedi surfaces

2026-01-21
http://uniindia.com/~/tripura-congress-mla-demands-anti-racism-bill-and-curriculum-correction/States/news/3691007.html
Video Of Heated Exchange Between Reporter And COAS Is A Deepfake | BOOM

2026-01-19
boomlive.in
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a deepfake voice-over, which directly led to the creation and dissemination of false information about a public figure. This constitutes harm to communities through misinformation and a violation of the right to truthful representation. The article confirms the AI-generated nature of the content and the falsity of the claims, indicating realized harm rather than merely potential risk. This event therefore meets the criteria for an AI Incident.
AI-Manipulated Video of COAS Dwivedi and a Journalist Over Op Sindoor Viral

2026-01-20
TheQuint
Why's our monitor labelling this an incident or hazard?
The video was created using AI deepfake technology, as confirmed by an AI-detection tool. The manipulated content falsely portrays a sensitive military discussion, misleading viewers and causing reputational and societal harm. Because the AI system's use directly produced misinformation and harm to communities, this qualifies as an AI Incident under the framework.
'Deepfake Video Alert': India Calls Out Pak Propaganda Accounts Over Doctored Video of Army Chief on Op Sindoor

2026-01-21
Republic World
Why's our monitor labelling this an incident or hazard?
The article explicitly states that deepfake technology, an AI system, was used to create a doctored video falsely representing the Indian Army Chief's remarks. The dissemination of this manipulated content by propaganda accounts is causing harm by misleading the public and potentially destabilizing social and political conditions, which fits the definition of harm to communities. This event therefore qualifies as an AI Incident due to the realized harm caused by the AI system's use.
Deepfake Video of Indian Army Chief Spread by Pakistani Propaganda Accounts, PIB Fact Check Issues Alert

2026-01-21
Asianet Newsable
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, a clear example of AI-generated manipulated content. The deepfake has been actively spread, causing misinformation and undermining trust in a public institution, which is a form of harm to communities. That the misinformation is already circulating and has been officially fact-checked confirms the harm is occurring, not merely potential. This is therefore an AI Incident rather than a hazard or complementary information.