AI-Generated Deepfake Video Targets Indian Army Chief, Spreads Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pakistani propaganda accounts circulated AI-generated deepfake videos falsely attributing controversial statements to Indian Army Chief General Upendra Dwivedi, including claims about handing over Arunachal Pradesh to China. Indian authorities debunked the videos, warning the public against misinformation and highlighting the reputational harm caused by AI-manipulated content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems used to generate fake videos (deepfakes) that have been disseminated widely, causing misinformation and undermining trust in important public figures and institutions. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities by spreading false narratives and eroding public trust. The government's warning and fact-checking confirm the harm is occurring, not just a potential risk. Hence, the classification as AI Incident is appropriate.[AI generated]
AI principles
Accountability
Respect of human rights
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

'Pakistani propaganda': PIB fact-checks Army chief Upendra Dwivedi's video claiming Sonam Wangchuk died in custody; calls it AI | India News - The Times of India

2025-11-28
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to create a deepfake video, which is a clear AI application. The video falsely claims a custodial death and a statement by the Army Chief, which is misinformation that could harm communities by spreading false narratives and undermining trust in the military and government institutions. Although the misinformation is being actively debunked, the AI-generated content's existence and circulation represent a plausible risk of harm. Since the harm is potential and not confirmed as having occurred, this qualifies as an AI Hazard rather than an AI Incident. The article's main focus is on the fact-check and warning, not on a realized harm event.

Centre flags AI-doctored video of Army Chief, warns against Pakistani disinformation

2025-11-27
India Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate fake videos (deepfakes) that have been disseminated widely, causing misinformation and undermining trust in important public figures and institutions. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities by spreading false narratives and eroding public trust. The government's warning and fact-checking confirm the harm is occurring, not just a potential risk. Hence, the classification as AI Incident is appropriate.

Pak Propaganda Accounts Share AI-Generated Deepfake Of Army Chief Upendra Dwivedi Claiming Arunachal 'Handover To China'; PIB Fact-Checks Fake Video

2025-11-27
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly describes an AI-generated deepfake video that falsely attributes statements to a high-ranking military official, which is a clear case of misinformation causing harm to communities and public trust. The AI system's use in fabricating and disseminating this video directly led to the harm described. The fact-checking and warnings by authorities confirm the malicious use and the harm caused. Hence, this is an AI Incident rather than a hazard or complementary information.

Fact Check: 'Pakistani Propaganda' Accounts Share Deepfake VIDEO Of President Murmu Claiming Threat To Minorities In India; PIB Debunks Fake News

2025-11-27
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos that have been shared and are misleading the public with false claims about prominent figures. The harm is realized as misinformation is spreading, which harms communities by undermining trust and potentially inciting social tensions. The AI system's role in generating these deepfakes is pivotal to the harm. Therefore, this event meets the criteria for an AI Incident due to direct harm to communities through misinformation caused by AI-manipulated content.

Fact Check: 'Pakistani Propaganda' Accounts Share AI VIDEO Of Chief Of Army Staff General Upendra Dwivedi Claiming Custodial Death Of Sonam Wangchuk; PIB Debunks Fake News

2025-11-27
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake videos that falsely depict statements by high-profile officials, which is a clear misuse of AI technology to spread misinformation. This misinformation can harm public trust and social cohesion, which fits the definition of harm to communities. The fact that the videos are already being shared and have caused misinformation means the harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Fake news alert: Pakistani propaganda accounts try to use AI to smear Army Chief and drag WION into it

2025-11-28
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create a deepfake video that falsely attributes statements to the Army Chief, which is misinformation causing reputational harm and undermining public trust. The harm is realized as the misinformation is actively circulated and flagged by official agencies. The AI system's use in generating the fake video is central to the incident, fulfilling the criteria for an AI Incident due to direct harm to communities through misinformation.

No, COAS Upendra Dwivedi Did Not Propose "Giving Arunachal to China" | BOOM

2025-11-27
BOOMLive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake voice overlay, which is a clear AI system involvement. The AI's use here is malicious, creating false content that misrepresents a public figure's speech. While the article clarifies that the claim is false and no direct harm has yet occurred, the potential for harm through misinformation and its consequences is credible. Since no actual harm has materialized but there is a plausible risk of harm from the AI-generated deepfake, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Centre Warns Against Sharing Misinformation After Fake Army Chief Video Surfaces

The government has debunked the video claiming that the Chief of the Army Staff, General Upendra Dwivedi, has suggested handing over Arunachal Pradesh to China to stop Beijing from supporting Pakistan...

2025-11-27
newsonair.gov.in
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and digitally altered, indicating the involvement of an AI system in creating misleading content. The misinformation has already been disseminated, causing harm by undermining trust in a critical national institution, which qualifies as harm to communities. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the AI-generated misinformation.