AI-Generated Deepfake of Rahul Gandhi Swearing-In as PM Spreads Misinformation During Indian Elections


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated voice clone of Rahul Gandhi, falsely depicting him being sworn in as Prime Minister, went viral on social media during the 2024 Lok Sabha elections. Fact-checkers confirmed the audio was AI-generated, raising concerns about deepfake-driven misinformation influencing public opinion and election integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI systems (AI voice cloning) to generate manipulated audio content. The AI-generated deepfake has been disseminated on social media, which can mislead the public and disrupt the democratic process, constituting harm to communities. Since the AI system's use has directly led to misinformation with potential societal harm, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Transparency & explainability
Accountability
Robustness & digital security
Safety
Democracy & human autonomy
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
General public

Harm types
Reputational
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Fact Check: Viral audio clip of Rahul Gandhi swearing in as PM is AI-generated

2024-04-29
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (AI voice cloning) to generate manipulated audio content. The AI-generated deepfake has been disseminated on social media, which can mislead the public and disrupt the democratic process, constituting harm to communities. Since the AI system's use has directly led to misinformation with potential societal harm, this qualifies as an AI Incident under the framework.

Fact Check: AI-Generated Clip Of Rahul Gandhi Swearing In As PM Goes Viral

2024-04-29
Jagran English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to create a fake video and voice clone of Rahul Gandhi swearing in as Prime Minister, which is spreading misinformation during an election. This constitutes harm to communities by misleading voters and potentially influencing election outcomes. The AI system's use in generating and disseminating this manipulated content directly leads to this harm, qualifying the event as an AI Incident.

Video Of Kamal Nath Doctored With AI Voice To Make False Claim

2024-04-27
NDTV
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI voice cloning, which is an AI system generating false audio content. The misuse of this AI system has directly led to misinformation and reputational harm, which qualifies as harm to communities. Therefore, this event is an AI Incident due to the realized harm caused by the AI-generated deepfake content.

AI-Generated Fake Clip Of Rahul Gandhi Swearing-In As PM Goes Viral

2024-04-29
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice cloning AI) used to create a deepfake audio clip that falsely represents a political leader making a statement about election outcomes. The clip's viral spread on social media can mislead voters and disrupt the democratic process, which constitutes harm to communities. The harm is realized because the misinformation is actively circulating and influencing public discourse. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in causing harm through misinformation during a sensitive political event.

Fact Check: It's AI Audio In Viral Clip Of Rahul Gandhi Swearing In As PM

2024-04-28
english
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (AI voice cloning) used to create a deepfake audio clip. The harm is realized as the misinformation is actively circulating and influencing public perception during elections, which constitutes harm to communities and potentially violates rights to truthful information. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

Viral Audio Clip Of Rahul Gandhi Swearing In As PM Is AI-Generated | BOOM

2024-04-28
BOOMLive
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies the audio clip as AI-generated using voice cloning technology, confirmed by multiple AI deepfake detection tools. The clip is being widely shared with misleading political implications during an election, which can harm public trust and democratic integrity, thus constituting harm to communities. The AI system's use in generating and spreading this manipulated content directly leads to this harm. Hence, the event meets the criteria for an AI Incident.

Did Aamir Khan Target PM Modi? No, Video Is Doctored With AI Voice | BOOM

2024-04-30
BOOMLive
Why's our monitor labelling this an incident or hazard?
The video uses AI voice cloning technology to create a deepfake that misrepresents a public figure, which is a direct misuse of AI leading to misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and potential violation of rights to accurate information.

Did Ranveer Singh Criticise PM Modi Over Unemployment & Inflation?

2024-04-30
BOOMLive
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic voice clone and manipulate video content, constituting a harmful use of AI. The deepfake video could mislead viewers, causing reputational harm and spreading misinformation, which is a form of harm to communities. Since the harm is realized (the video is viral and misleading), this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

Viral Video Of Kamal Nath's Article 370 Statement Is A Deepfake | BOOM

2024-04-30
BOOMLive
Why's our monitor labelling this an incident or hazard?
The video is a deepfake created using AI voice cloning technology, which is an AI system generating synthetic audio. The misuse of this AI system has directly led to misinformation being spread, which constitutes harm to communities by misleading the public and potentially influencing political opinions. Therefore, this qualifies as an AI Incident due to the realized harm from the AI-generated deepfake content.

AI-Generated Voice Clone of Rahul Gandhi Goes Viral Amidst Lok Sabha Elections

2024-04-29
NewsX
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a voice clone of Rahul Gandhi, which was then disseminated widely on social media, causing misinformation and speculation about political outcomes. This constitutes harm to communities by spreading false information that can influence public opinion and election integrity. Since the AI-generated content has already been circulated and caused confusion, this qualifies as an AI Incident due to realized harm from the AI system's use in generating deceptive content.