Celebrity Deepfake Raises AI Misuse Concerns and Sparks Legal Action

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

At the Rising Bharat Summit 2025, actor Nushrratt Bharuccha warned against the misuse of AI deepfakes, citing the viral deepfake video misrepresenting Rashmika Mandanna that emerged in 2023. The incident, which prompted police investigations and discussions on AI ethics, highlights the growing threat deepfake technology poses to human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI (deepfake technology) to create manipulated video content that harmed Rashmika Mandanna's reputation and privacy, which is a violation of rights under applicable law. The harm has materialized, as evidenced by the viral spread of the video, distress caused, and legal actions taken. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use (deepfake generation).[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Safety
Accountability
Robustness & digital security
Transparency & explainability

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation
Digital security

Affected stakeholders
Women

Harm types
Reputational
Psychological
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Nushrratt Bharuccha Breaks Silence On Rashmika Mandanna's Deepfake Case, Calls It 'Unreal'

2025-04-08
English Jagran
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create manipulated video content that harmed Rashmika Mandanna's reputation and privacy, which is a violation of rights under applicable law. The harm has materialized, as evidenced by the viral spread of the video, distress caused, and legal actions taken. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use (deepfake generation).
Nushrratt Bharuccha Calls Rashmika Mandanna's Deepfake Case 'Scary', Says 'It's The World I...'

2025-04-08
News18
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI technology that directly led to harm in the form of forgery, reputational damage, and emotional distress to the individual involved. The involvement of AI in creating the morphed video and the resulting legal and social consequences meet the criteria for an AI Incident, as the AI system's use directly caused harm to a person's rights and reputation.
'It's very scary...': Nushrratt Bharuccha breaks silence on Rashmika Mandanna's deepfake case

2025-04-09
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) that has directly led to harm by creating and spreading a fake video of a public figure, which constitutes a violation of personal rights and causes harm to the individual and potentially to communities by spreading misinformation or harassment. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to the individual).
Nushrratt Bharuccha REACTS to Rashmika Mandanna's Deepfake case: 'It's very scary and that's what I was saying its unreal...'

2025-04-08
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a fake video that harmed Rashmika Mandanna's reputation and caused public outrage. The harm is realized, as the video went viral and legal action was initiated. The involvement of AI in the creation of the deepfake and the resulting harm to the individual's reputation and potential psychological impact qualifies this as an AI Incident under the framework, specifically under harm to persons and violation of rights (reputation).
Rising Bharat Summit 2025: Nushrratt Bharuccha on AI Deepfakes of Bollywood celebs, says "It is scary, I rely on Cybercrime"

2025-04-08
Moneycontrol
Why's our monitor labelling this an incident or hazard?
The article centers on the potential misuse of AI deepfakes and the vulnerability of celebrities to such technology, which could plausibly lead to harms such as reputational damage or violation of rights. Since no specific harm or incident is described as having occurred, this qualifies as an AI Hazard. The presence of AI deepfake technology and its misuse potential is clear, and the concerns raised indicate plausible future harm, but no direct or indirect harm is reported yet.