Celebrities Targeted by Harmful Deepfake AI Videos in India

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos have targeted several Indian celebrities, including Alia Bhatt, Rashmika Mandanna, Katrina Kaif, Kajol, and public figures like PM Modi. These deepfakes, created using advanced AI and machine learning, have led to violations of personal rights, reputational harm, and widespread misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly details AI systems used to create deepfakes that have been employed in scams and impersonation, causing direct harm to victims through fraud and defamation. The involvement of AI in generating manipulated audio and video content that deceives people and leads to financial and reputational harm fits the definition of an AI Incident. The harms described include violations of rights and harm to individuals and communities, and the AI system's role is pivotal in enabling these harms. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Transparency & explainability; Accountability; Robustness & digital security; Democracy & human autonomy; Safety; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
Other; General public

Harm types
Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

DEEPFAKES HOW THEY ARE MADE AND WAYS TO COMBAT THEM | Times of India

2024-05-04
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly details AI systems used to create deepfakes that have been employed in scams and impersonation, causing direct harm to victims through fraud and defamation. The involvement of AI in generating manipulated audio and video content that deceives people and leads to financial and reputational harm fits the definition of an AI Incident. The harms described include violations of rights and harm to individuals and communities, and the AI system's role is pivotal in enabling these harms. Therefore, this event qualifies as an AI Incident.
Alia Bhatt's face morphed on actress Wamiqa Gabbi, watch her viral deepfake video

2024-05-07
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI deepfake technology to create misleading videos that have been disseminated, causing harm to the individuals involved. The harm is realized, not just potential, as the actresses express distress and concern about the misuse of their images and voices. This constitutes a violation of rights and harm to communities due to misinformation and identity manipulation. Therefore, this qualifies as an AI Incident under the OECD framework.
Alia Bhatt's Deepfake Shows Her Face Morphed on Wamiqa Gabbi; SHOCKING Video Goes Viral - News18

2024-05-06
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated media generated by AI algorithms. The use of these deepfakes has directly led to harm in the form of violation of personal rights, privacy, and potential reputational damage to the actresses involved. This constitutes a violation of human rights and personal rights under applicable law, fulfilling the criteria for an AI Incident. The harm is realized as the videos have gone viral and caused distress to the victims, not merely a potential risk. Therefore, this event qualifies as an AI Incident.
Alia Bhatt Becomes Deepfake Victim, Viral Video Shows Her Face Morphed on Wamiqa Gabbi

2024-05-07
TimesNow
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system capable of generating realistic manipulated videos, to create harmful content involving Alia Bhatt and other actresses. This use of AI has directly led to harm in terms of violation of personal rights and potential reputational damage, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights.
PM Modi called 'THE DICTATOR' in viral deepfake video, here's how he reacted

2024-05-07
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which are manipulated media generated by AI. While these deepfakes have been widely shared and have caused public buzz, the article does not document any realized harm such as injury, rights violations, or disruption of critical infrastructure. The Election Commission's warnings and directives reflect concern about plausible future harm to electoral integrity. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated deepfakes could plausibly lead to harm (e.g., misinformation affecting elections), but no direct harm has been reported yet.
RSAC 2024 Innovation Sandbox | Reality Defender: Deepfake Detection Platform

2024-05-05
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article focuses on describing an AI system (Reality Defender's deepfake detection platform) and the broader context of deepfake technology and its associated risks. It highlights the potential harms deepfakes can cause and how the detection tools help mitigate these harms. However, it does not report a new AI Incident (no specific harm event caused by AI is described) nor a new AI Hazard (no new plausible future harm event is detailed). Instead, it provides complementary information about AI-related threats and responses, including technological, societal, and regulatory aspects. Therefore, the article fits the definition of Complementary Information.
Pope Francis to Sachin Tendulkar: Famous people who have been targeted by deepfake scam

2024-05-03
News9live
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI-based deepfake technology to create false videos and images of public figures, which have been widely disseminated and caused harm such as misinformation, reputational damage, and violation of personal rights. The AI system's role is pivotal in generating these harmful deepfakes. The harms are realized and ongoing, not merely potential. Hence, this fits the definition of AI Incident, as the AI system's use has directly led to harm to persons and communities through misinformation and unauthorized content.