Ashutosh Rana's Deepfake Video Sparks Concerns Over AI Misuse

The information displayed in the AIM (OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Bollywood actor Ashutosh Rana has been targeted by a deepfake video that falsely shows him supporting a political party during elections. The AI-generated video, which could lead to character assassination and misinformation, highlights the potential harm of AI misuse. Rana expressed concern about how easily such misleading content can be created and about its impact on personal reputation. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to generate manipulated audio and video content (deepfakes) that have been used to falsely depict public figures endorsing political parties. This misuse of AI has directly led to reputational harm and misinformation during election campaigns, which constitutes harm to communities and violations of rights. The filing of police complaints and FIRs confirms that harm has occurred. Hence, the event meets the criteria for an AI Incident. [AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Safety; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Government, security, and defence; Digital security

Affected stakeholders
Other; General public

Harm types
Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Aamir Khan to Ashutosh Rana: Actors who have been deepfaked during election campaigns

2024-05-11
Hindustan Times

Ashutosh Rana REACTS To His Deepfake Video Supporting a Political Party: 'One Has To Be Cautious' - News18

2024-05-11
News18
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual and audio content to create realistic but fake representations. The misuse of such AI-generated content to falsely depict actors endorsing political parties constitutes a violation of their rights and causes harm to their reputation and public image. This aligns with the definition of an AI Incident, as the AI system's use has directly led to harm (character assassination, misinformation) and legal consequences. Therefore, this event qualifies as an AI Incident.

Ashutosh Rana Breaks Silence on Deepfake Scandal After His Fake Political Video Trends Online: 'This Could Lead to Character Assassination'

2024-05-11
India.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create a manipulated video that falsely portrays Ashutosh Rana endorsing a political party. This misuse of AI has directly caused harm to the individual's reputation and poses broader risks to privacy and democratic processes. Since the harm has already occurred (the video circulated and trended online), this qualifies as an AI Incident under the definitions provided, specifically as a violation of rights and harm to community trust and individual reputation.

Ashutosh Rana on his deepfake video supporting BJP: 'Takes years to build image'

2024-05-10
India Today
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as the deepfake video is generated using AI technology. The harm is realized as the actor's image and speech were manipulated without consent, leading to misinformation and potential reputational damage, which qualifies as harm to the individual (harm to person or community). Therefore, this event meets the criteria of an AI Incident because the AI system's use has directly led to harm. The article does not merely discuss potential harm or responses but reports on an actual incident of AI misuse causing harm.

Ashutosh Rana falls prey to Deepfake video supporting political party, 'Answerable to my wife..'

2024-05-11
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that falsely represents the actor's political stance. This misuse of AI has directly led to reputational harm and potential misinformation affecting the actor and the public discourse during elections, which can be considered harm to communities and violation of rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Ashutosh Rana breaks silence on his deepfake video supporting a political party: 'I would only be answerable to...'

2024-05-11
Daily News and Analysis (DNA) India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated video content. The deepfake video could plausibly lead to harm such as character assassination, misinformation, or political manipulation, which fits the definition of an AI Hazard. However, the article does not report any actual harm or incident resulting from the video, only the potential risks and the actor's commentary on the issue. Therefore, this event is best classified as an AI Hazard, as the deepfake video represents a plausible risk of harm but no direct or indirect harm has been documented in the article.

Ashutosh Rana Reacts After Deepfake Video Of Him Supporting Political Party Goes Viral

2024-05-11
Outlook India
Why's our monitor labelling this an incident or hazard?
The deepfake video involves an AI system generating manipulated content that falsely attributes political support to the actor, which constitutes a violation of personal rights and reputational harm. Since the fake video has already circulated and caused harm to the actor's image, this qualifies as an AI Incident due to realized harm from AI misuse (character assassination).

Ashutosh Rana addresses controversy over alleged deepfake video supporting political party : Bollywood News - Bollywood Hungama

2024-05-11
Bollywood Hungama
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media by manipulating or fabricating visual content. The article explicitly mentions a deepfake video of Ashutosh Rana supporting a political party, which he denies. The use of AI to create this misleading video has directly led to reputational harm (character assassination risk) to the actor, which is a violation of personal rights and harm to community trust. Therefore, this event meets the criteria of an AI Incident due to realized harm caused by the AI system's use.

Viral: Ashutosh Rana's deepfake video showing him supporting a political party amid elections raises concerns

2024-05-11
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, a form of AI-manipulated content. The video misrepresents the actor's political stance, which can mislead voters and damage his reputation, causing harm to the individual and potentially to the community's trust in information. Since the harm is already occurring (the video is circulating and causing concern), this qualifies as an AI Incident under the definitions provided, specifically as reputational harm, harm to communities, and a violation of rights.