Janhvi Kapoor recalls teenage deepfake ordeal, lauds Rashmika Mandanna’s stance

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Bollywood actress Janhvi Kapoor revealed that, at 15, her photos were morphed into deepfake images without her consent, a violation she initially kept secret for fear of backlash. She now commends Rashmika Mandanna for speaking out against a recent deepfake video and urges stronger cyber laws to curb AI-driven image manipulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions deepfake videos and morphed images, which are generated using AI systems. The circulation of such content has directly harmed the individuals involved by violating their rights and causing reputational and emotional harm. The misuse of AI technology here has led to a clear violation of personal rights and identity theft, fitting the definition of an AI Incident due to direct harm caused by the AI system's outputs.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Accountability; Robustness & digital security; Transparency & explainability; Safety

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Women

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Janhvi Kapoor saw her deep fake at the age of 15 and didn't complain because of this reason | Hindi Movie News - Times of India

2024-01-04
The Times of India
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated deepfake content, which is an AI system application. However, it does not describe a specific AI Incident where harm has directly or indirectly occurred, nor does it describe a new AI Hazard with plausible future harm. Instead, it provides personal experiences and social commentary related to AI-generated deepfakes, which fits the category of Complementary Information as it adds context and understanding to the broader issue of AI harms without reporting a new incident or hazard.
Janhvi Kapoor lauds Rashmika Mandanna for taking stand against DeepFake video; reveals her teen pics were morphed | Hindi Movie News - Times of India

2024-01-05
The Times of India
Why's our monitor labelling this an incident or hazard?
The article centers on the use of AI technology (DeepFake) to create manipulated videos that have caused harm to individuals' reputations and privacy, which qualifies as harm to persons (a form of harm to communities and individuals). However, the article mainly provides commentary and support regarding past incidents rather than reporting a new or ongoing AI Incident or a plausible future hazard. It serves as complementary information by providing context and societal response to AI misuse in the form of DeepFakes.
Janhvi Kapoor reveals her morphed teen pics and videos were circulated on social media; lauds Rashmika Mandanna for speaking against deepfake | Etimes - Times of India Videos

2024-01-05
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake videos and morphed images, which are generated using AI systems. The circulation of such content has directly harmed the individuals involved by violating their rights and causing reputational and emotional harm. The misuse of AI technology here has led to a clear violation of personal rights and identity theft, fitting the definition of an AI Incident due to direct harm caused by the AI system's outputs.
Janhvi Kapoor Reveals She Became Victim Of Deep Fake Incident In Childhood - UrduPoint

2024-01-05
UrduPoint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos, which are a form of AI system producing manipulated content. The harm is realized as the actress was a victim of such manipulation, which affected her personally and emotionally. This constitutes a violation of rights and harm to the individual, meeting the criteria for an AI Incident. The event is not merely a potential risk or a general discussion but a concrete case of harm caused by AI misuse.
Rashmika Mandanna DeepFake Video: Janhvi Kapoor Reveals Her Photos Were Morphed at 15, 'Little Conscious'

2024-01-07
India News, Breaking News, Entertainment News | India.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI systems capable of generating realistic but fake videos. The viral spread of this deepfake has caused harm to the person depicted and raised concerns about misuse of AI technology. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights, emotional harm, reputational damage). The article also references public reactions and calls for legal action, but the primary focus is on the harm caused by the AI-generated deepfake content.
Janhvi Kapoor on Rashmika Mandanna's viral Deepfake video: 'There are bigger and tougher things people...'

2024-01-07
Firstpost
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake visual content. The viral spread of such videos has directly led to emotional harm and reputational damage to the individuals involved, fulfilling the criteria of harm to persons and communities. The article explicitly mentions the consequences and repercussions of the deepfake videos, indicating realized harm rather than potential harm. Hence, this is an AI Incident as the AI system's misuse has directly caused harm.
Janhvi Kapoor reveals being deepfake victim during teenage

2024-01-05
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos involving Janhvi Kapoor, which are a product of AI systems manipulating images and videos. The harm is realized as these deepfakes were circulated, impacting her privacy and potentially her reputation, which falls under violations of human rights or breach of obligations protecting fundamental rights. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.
Janhvi Kapoor lauds Rashmika Mandanna for condemning deepfake video; reveals her photos were morphed at age 15: "I thought I can't complain" : Bollywood News - Bollywood Hungama

2024-01-05
Bollywood Hungama
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated visual content. The article explicitly mentions victims of deepfake videos, which are AI-generated altered media causing harm to the individuals involved. The harm includes violation of privacy and potential reputational damage, which falls under violations of human rights and harm to communities. Since the harm is realized and the AI system's use directly led to this harm, this qualifies as an AI Incident.
Janhvi Kapoor praises Rashmika standing against deepfake - The Statesman

2024-01-04
The Statesman
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of deepfake technology, which manipulates images using AI. The harm described (manipulated images affecting personal perception and self-expression) is a recognized AI-related harm. However, the article does not report a specific incident of harm occurring at the time of reporting, nor does it describe a new hazard or risk event. Instead, it focuses on personal experiences and public stance, which is complementary information enhancing understanding of AI harms related to deepfakes.
OMG! Jahnvi Kapoor recalls coming across morphed pictures of herself

2024-01-04
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear as deepfake and morphed images are generated using AI technologies. The harm is realized as Janhvi Kapoor has been a victim of these altered images, which constitute a violation of personal rights and can be considered harm to the individual. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational harm). The article does not focus on future risks or responses but on the actual harm experienced, so it is not an AI Hazard or Complementary Information.
Janhvi Kapoor's Silent Struggle: Confronting Deepfake Image at 15, 'I thought I can't complain...'

2024-01-05
womansera.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that manipulates images and videos to create realistic but fabricated content. Janhvi Kapoor's experience of having her image morphed without consent constitutes harm to her reputation and emotional well-being, which falls under harm to individuals and violation of rights. The event describes realized harm caused by the AI system's misuse, qualifying it as an AI Incident under the framework.