Bollywood Celebrities Targeted by AI-Generated Deepfake Scams

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake technology has been used to create and spread manipulated videos and audio of Bollywood celebrities, including Priyanka Chopra, Rashmika Mandanna, Katrina Kaif, Alia Bhatt, and Kajol. These deepfakes have caused reputational harm, privacy violations, and widespread misinformation, highlighting the dangers and lack of legal protections against AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to create deepfake videos involves an AI system generating manipulated content that misrepresents a person, which can lead to harm such as reputational damage, misinformation, and societal disruption. The article indicates that this is a recognized threat by authorities and has already caused viral spread, implying realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Safety; Human wellbeing; Robustness & digital security

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Women; General public

Harm types
Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Priyanka Chopra's deepfake video goes viral

2023-12-06
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The use of AI to create deepfake videos involves an AI system generating manipulated content that misrepresents a person, which can lead to harm such as reputational damage, misinformation, and societal disruption. The article indicates that this is a recognized threat by authorities and has already caused viral spread, implying realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.
Deepfake: Priyanka Chopra is the latest victim, no end to the menace

2023-12-06
en.etemaaddaily.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated video content that misrepresents a person, which can cause harm to the individual's reputation and deceive the public. This is a direct harm caused by the AI system's use, fitting the definition of an AI Incident due to violation of rights and harm to communities through misinformation and impersonation.
After Kajol, Rashmika Mandanna, Katrina Kaif and Alia Bhatt, Priyanka Chopra becomes latest VICTIM of DEEPFAKE - Times of India Videos

2023-12-06
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI system capable of generating realistic but fake audio-visual content. The use of these deepfakes has directly led to harm in the form of violations of personal rights, including privacy and potentially defamation, which fall under violations of human rights or breach of obligations protecting fundamental rights. The sharing and viral spread of such manipulated content can also harm communities by spreading misinformation and causing social disruption. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
Priyanka Chopra Falls Prey To Deepfake After Rashmika Mandanna, Alia Bhatt; Doctored Audio Goes Viral - News18

2023-12-05
News18
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to manipulate video and audio content, resulting in the creation and dissemination of false and harmful media. This misuse of AI has directly led to harm in the form of reputational damage and violation of personal rights of the actresses involved. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and individuals.
Priyanka Chopra falls prey to deepfake after Rashmika, Katrina and Alia

2023-12-05
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos that manipulate real individuals' images and voices without consent, leading to misinformation and reputational harm. The harm is realized as these videos are circulating online, causing potential violations of rights and harm to communities through misleading content. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Deepfake Row: Priyanka Chopra is the latest victim after Alia Bhatt, Katrina Kaif

2023-12-06
Mashable India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos that manipulate both visual and audio content of celebrities, leading to misinformation and reputational harm. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The government's regulatory response is complementary information but the main event is the realized harm from AI misuse.
Priyanka Chopra falls prey to deepfake technology after Rashmika Mandanna, Alia Bhatt, Katrina Kaif : Bollywood News - Bollywood Hungama

2023-12-06
Bollywood Hungama
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically deepfake generative AI technology used to create manipulated videos. The harm is indirect but real, as the deepfakes misrepresent celebrities, potentially violating their rights and causing reputational harm, which falls under violations of human rights or breach of obligations protecting fundamental rights. The circulation of such videos on social media platforms constitutes an AI Incident because the harm (misinformation, reputational damage) is occurring. The government's response is complementary information but does not negate the incident classification.
Priyanka Chopra becomes latest victim of deepfake following Rashmika, Katrina, and Alia Bhatt

2023-12-06
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology using AI and machine learning) that have been used to create manipulated videos causing harm to the celebrities' rights and reputations. The harm is realized as these videos are circulating online, misleading viewers and potentially causing reputational damage and emotional distress. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities. The article also highlights the need for measures to counter such misuse, but the primary focus is on the realized harm from the deepfake videos.
Priyanka Chopra Targeted in the Latest Wave of Deepfake after Alia Bhatt & Katrina Kaif

2023-12-06
TimesNow
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that create realistic but fabricated content. The viral spread of such manipulated videos with false financial details and audio directly harms the individuals targeted, constituting an AI Incident under the definition of harm to communities and violation of rights. The event describes realized harm through the viral dissemination of manipulated content, not just a potential risk.
Deepfake Scandal: Priyanka Chopra Becomes Latest Victim After Rashmika Mandanna, Katrina Kaif

2023-12-05
Jagran English
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic but fake audio-visual content. The malicious use of this AI system to create and spread manipulated videos directly leads to harm by violating the rights of the individuals depicted, including potential reputational damage and emotional distress. The event describes actual harm occurring, not just potential harm, as evidenced by the victims' reactions and public concern. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Priyanka's fake video goes viral as Bollywood struggles with AI

2023-12-06
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that have been widely circulated, causing harm to the individuals depicted and raising societal concerns. The harms include violation of personal rights (identity theft, non-consensual use of images), reputational damage, emotional distress, and broader social harm through misinformation and sectarian tensions. These harms have materialized, not just potential, making this an AI Incident rather than a hazard or complementary information. The article also references prior incidents and societal impacts, reinforcing the classification.
No end to Deepfake Menace! Priyanka Chopra is the latest victim after Alia, Katrina and Rashmika

2023-12-06
India TV News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos and images of celebrities, which constitutes a violation of rights and harms to individuals' reputations and communities. These harms have already occurred as the deepfake content is circulating widely. Therefore, this qualifies as an AI Incident. The article also mentions regulatory actions, but the primary focus is on the realized harms caused by the deepfakes.
AI wreaks havoc as Priyanka Chopra falls victim to fake video

2023-12-06
The Daily Star
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that have directly led to harm, including reputational damage, emotional distress, and potential social harm through misinformation and sectarian tensions. The misuse of AI-generated deepfakes to spread false content about public figures constitutes a violation of rights and harm to communities. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.
Priyanka Chopra Jonas falls victim of deepfake technology

2023-12-07
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated video content that misleads viewers. The harm is realized as the video circulated and caused confusion and reputational damage to the individual, which fits the definition of an AI Incident under violations of rights and harm to communities. Although the harm is non-physical, it is significant and clearly articulated, with the AI system's role pivotal in creating the deceptive content. Therefore, this qualifies as an AI Incident.
OMG! After Rashmika Mandanna, Alia Bhatt and Katrina Kaif, Priyanka

2023-12-05
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos, which are AI-generated synthetic media that manipulate visual and audio content. The harm includes violations of personal rights and reputational damage to the actresses, which falls under violations of human rights or breach of applicable laws protecting individual rights. Since the AI-generated content is actively causing harm through misinformation and unauthorized use of likeness and voice, this qualifies as an AI Incident.
After Rashmika-Katrina-Alia, Now Priyanka Chopra Jonas Trapped In Deepfake Row - Woman's era

2023-12-08
womansera.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology, which uses AI to generate realistic but fake videos and audio. The harm is realized as the manipulated content has been widely disseminated, causing reputational damage and social fear among the victims and their audiences. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals. The event is not merely a potential risk but describes actual harm occurring due to the AI system's misuse.
Priyanka's Deep fake video goes viral on internet

2023-12-07
Khyber News -Official Website
Why's our monitor labelling this an incident or hazard?
The use of AI to create and disseminate a deep fake video that falsely portrays a public figure and spreads misinformation is a direct harm to the individual's rights and reputation, as well as potentially misleading the public. The AI system's use in generating manipulated content that has gone viral meets the criteria for an AI Incident due to realized harm involving violations of rights and harm to communities through misinformation.
Deepfake alert: Priyanka Chopra becomes latest target of Deepfake content, watch visuals

2023-12-06
PTC News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deep learning-based deepfake technology) used to create fabricated content that misrepresents real individuals. The misuse of these AI systems has directly led to harm in terms of privacy violations and reputational damage to the celebrities targeted. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities through manipulated content. Therefore, the event is classified as an AI Incident.
Cyber Law Expert Shares What India Can Do to Fight Deepfake Scams

2023-12-07
Sputnik India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) that have directly led to harms such as misinformation, harassment, blackmail, and reputational damage to individuals, which fits the definition of an AI Incident. The article describes ongoing harms caused by AI-generated deepfakes and the societal and legal challenges in addressing them. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are already occurring and linked to AI misuse.
After Rashmika, Katrina, and Alia, Priyanka Chopra also falls victim to a deepfake video

2023-12-06
Express Urdu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos and images, which are being widely disseminated, causing harm to the individuals depicted. The harm includes violation of privacy, potential defamation, and reputational damage, which fall under violations of human rights and harm to communities. The involvement of AI in generating these fake videos is direct and central to the harm. The governmental advisory and legal responses are complementary information but do not negate the fact that harm has occurred. Hence, this is classified as an AI Incident.
Priyanka Chopra also falls victim to a deepfake video

2023-12-07
jang.com.pk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (deepfake technology) to create fake videos and audio of Priyanka Chopra, which have been circulated on social media. This constitutes the use of an AI system leading to harm through misinformation and potential reputational damage, fitting the definition of an AI Incident. The harm is realized as the fake video is already public and affecting the individual and community.
Fake videos of prominent Bollywood figures, including Alia Bhatt and Priyanka Chopra, go viral

2023-12-06
DawnNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos and altered audio, which have been disseminated widely causing harm to the celebrities' reputations and privacy. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (violation of rights and reputational harm). The government's advisory and legal warnings are responses to this harm but do not change the classification of the event itself as an AI Incident.
Priyanka Chopra also falls victim to 'deepfake' technology

2023-12-06
ARYNews.tv | Urdu - Har Lamha Bakhabar
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to produce manipulated media that misleads viewers and harms the reputation of individuals. This constitutes a violation of rights and harm to communities through misinformation and deception. Since the harm (misinformation and reputational damage) is occurring due to the AI system's use, this qualifies as an AI Incident under the framework.
Priyanka Chopra's deepfake video surfaces

2023-12-07
ARYNews.tv | Urdu - Har Lamha Bakhabar
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which are synthetic media generated by AI. These videos have been used to misrepresent individuals, causing harm to their reputation and potentially misleading the public. This constitutes a violation of rights and harm to communities through misinformation and impersonation. Since the harm is occurring through the dissemination of these AI-generated fake videos, this qualifies as an AI Incident.
After Rashmika, Katrina, and Alia, Priyanka Chopra also falls victim to a deepfake video

2023-12-06
dailykhabrain.com.pk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos and images that have been widely disseminated, causing harm to the individuals' reputations and potentially violating their rights. The harm is realized as the fake content is already circulating and affecting the actresses. Therefore, this qualifies as an AI Incident due to violations of rights and harm to individuals caused by AI-generated content. The mention of government advisories and legal actions is complementary information but does not change the primary classification of the event as an AI Incident.
Latest deepfake target: inappropriate video of Priyanka Chopra also goes viral

2023-12-06
dailykhabrain.com.pk
Why's our monitor labelling this an incident or hazard?
The use of AI-generated deepfake videos directly leads to harm by violating the rights of the individuals depicted, including potential reputational damage and misinformation spread. The AI system's use in creating these manipulated videos is central to the harm occurring. Therefore, this event qualifies as an AI Incident due to violations of rights and harm to communities through deceptive content dissemination.