Rashmika Mandanna speaks out against viral deepfake video


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Actress Rashmika Mandanna's deepfake video went viral in November 2023, prompting Delhi Police to arrest its creator. Mandanna has since spoken out about the dangers of AI-generated content, urging awareness and support for victims, especially vulnerable college students, and challenging societal norms that dismiss such digital violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves a deepfake video, which is a product of AI-based generative technology. The video caused reputational harm and emotional distress to Rashmika Mandanna, fulfilling the criteria of harm to a person. The AI system's misuse directly led to this harm, and legal actions have been taken against the perpetrator. Hence, this is an AI Incident as the AI system's use has directly led to harm.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women; General public

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Rashmika Mandanna REVEALS Why She Decided To Speak Against Her Deepfake Video: 'I Am Really Scared...' - News18

2024-02-01
News18

Bhumi Pednekar Says Deepfake Trend Is A 'Breach Of One's Privacy' - News18

2024-02-03
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI-generated deepfake videos, which are created using AI systems capable of generating realistic fake content. The misuse of this AI technology has directly led to harm, including privacy violations and distress to individuals, fulfilling the criteria for harm to rights and safety. The arrest related to a deepfake video confirms that harm has materialized. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Rashmika Mandanna on her deepfake video: 'I'm really scared for the girls'

2024-02-01
India Today
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system—deepfake technology—that was used maliciously to create a fake video of a person, leading to reputational harm and emotional distress, which are harms to the individual and community. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident. The legal response and public awareness efforts are complementary but do not negate the incident classification. Therefore, this is an AI Incident due to realized harm caused by the AI system's malicious use.

Exclusive: Bhumi Pednekar reacts to deepfake trend, says 'it's such a violation'

2024-02-02
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake videos created through AI and machine learning, which have been used to manipulate videos of celebrities without their consent. This constitutes a violation of privacy and basic rights, fulfilling the criteria for harm to human rights under the AI Incident definition. The harm is realized, not just potential, as the article references actual victims and legal actions taken. Therefore, this event qualifies as an AI Incident.

Rashmika Mandanna On Her Deepfake Video; 'No One Would Have Supported Me If This Had Happened In College'

2024-02-01
Mashable India
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video, which is a product of AI technology used to create manipulated content. The harm includes emotional distress to the individual depicted and societal harm through the threat to the dignity and privacy of individuals, especially vulnerable groups such as college students. The government's commitment to take strict action confirms recognition of the harm caused. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.

Rashmika Mandanna REVEALS Why She Reacted To Her Deepfake Video: 'Some Girl In Her College...'

2024-02-01
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-based generative technology used to create manipulated visual content. The harm caused includes reputational damage and emotional stress to the individual depicted, which falls under harm to persons and violation of rights. The involvement of law enforcement and legal provisions further confirms the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use (deepfake generation) directly led to harm.

Rashmika Mandanna throws light on her Deepfake video, 'I wondered if she was a college-going girl'

2024-02-01
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-based generative technology. The video was maliciously created and spread, causing harm to Rashmika Mandanna's reputation and emotional well-being, which fits the definition of harm to a person or group. Legal actions and police involvement further confirm the recognition of harm. Hence, the AI system's use directly led to harm, classifying this as an AI Incident.

Rashmika Mandanna breaks silence on deepfake ordeal - The Statesman

2024-02-01
The Statesman
Why's our monitor labelling this an incident or hazard?
The event describes a realized harm caused by the malicious use of an AI system (deepfake technology) to create and distribute a manipulated video of a public figure, leading to emotional distress and reputational damage. This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (emotional and reputational harm). The article also mentions similar harms to other actresses, reinforcing the pattern of harm from deepfake AI misuse. Therefore, this is classified as an AI Incident.

Finally! Rashmika Mandanna opens up about the REASON to choose

2024-02-04
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create a manipulated video of a person without consent, causing reputational damage and emotional harm. The involvement of law enforcement and legal action confirms the harm has materialized. The AI system's use directly led to harm to the individual, fulfilling the criteria for an AI Incident under violations of rights and harm to persons.