Alia Bhatt Targeted in Viral Deepfake Video, Raising Alarm Over AI Misuse

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Bollywood actress Alia Bhatt became the latest victim of AI-generated deepfake videos, with her face superimposed onto another woman's body in an obscene viral clip. The incident, part of a wider trend affecting several celebrities, has sparked public concern and calls for stricter action against AI-driven identity theft and misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes a specific AI Incident in which the development and use of AI systems to create deepfake videos has directly led to harm, including bullying, identity theft, and emotional distress to the actress. The AI system's use in generating manipulated content that was shared and caused harm fits the definition of an AI Incident under violations of rights and harm to communities. The involvement of government and social media platforms in the response is complementary but does not change the classification of the primary event as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Safety; Human wellbeing; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Women

Harm types
Reputational; Psychological; Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

Rashmika Mandanna: India actress urges women to speak up on deepfake videos

2023-11-28
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event describes a specific AI Incident where the development and use of AI systems to create deepfake videos has directly led to harm, including bullying, identity theft, and emotional distress to the actress. The AI system's use in generating manipulated content that was shared and caused harm fits the definition of an AI Incident under violations of rights and harm to communities. The involvement of government and social media platforms in response is complementary but does not change the classification of the primary event as an AI Incident.
Alia Bhatt's Deepfake Video Goes Viral: Here's How Government Is Planning To Tackle The Problem

2023-11-27
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the creation of deepfake videos, which manipulate visuals and audio to create realistic but fake content. The harm is realized as these deepfakes mislead viewers and pose threats to democracy and individual reputations, fulfilling the criteria for harm to communities and violations of rights. The government's acknowledgment of the threat and the need for regulation confirms the significance of the harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated deepfakes and the ongoing impact described.
Alia Bhatt falls prey to deepfake, obscene video sparks concerns over the use of AI - Times of India

2023-11-26
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of deepfake technology, which is an AI system that generates synthetic media by replacing faces in videos. The harm is realized as the deepfake video causes reputational damage and potential violation of privacy and rights of the person depicted. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and communities through misinformation and defamation. The article describes actual harm occurring, not just potential harm, so it is not an AI Hazard or Complementary Information.
Rashmika Mandana breaks silence on deep fake videos: We need to address this before more of us are affected by such identity theft | Hindi Movie News - Times of India

2023-11-28
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual and audio content to create realistic but fake representations of individuals. The mention of deepfake videos involving multiple celebrities and the concern about identity theft indicates a recognized harm to individuals' rights and reputations. Although the article does not describe a specific new incident causing harm, it references existing harms and the potential for further harm, emphasizing the need for action. Therefore, this is best classified as Complementary Information, as it provides context and a call to address an ongoing AI-related harm rather than reporting a new incident or hazard.
Rashmika Mandanna expresses gratitude to fellow industry colleagues for supporting her amid deepfake scandal | Etimes - Times of India Videos

2023-11-29
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions an AI-generated deepfake video that has been spread online, causing harm to Rashmika Mandanna through identity theft and emotional distress. The misuse of AI technology here directly leads to harm to a person, fulfilling the criteria for an AI Incident under violations of rights and harm to individuals. The support from industry colleagues and the actress's call to address the issue further highlight the realized harm and the need for urgent action.
Rashmika Mandanna thanks film industry friends for support amidst deepfake video scandal | Hindi Movie News - Times of India

2023-11-29
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The circulation of such a video has directly harmed Rashmika Mandanna's privacy and reputation, which is a violation of rights and harm to the individual. Since the harm has occurred due to the use of an AI system (deepfake generation), this qualifies as an AI Incident.
After Alia Bhatt's deepfake video, Rashmika Mandanna expresses her concerns, calls it 'scary' | Etimes - Times of India Videos

2023-11-28
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The circulation of such videos has directly led to harm in terms of privacy violations, reputational damage, and emotional distress to the individuals involved. The article describes actual incidents where deepfake videos of celebrities have gone viral, causing harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons and communities through the spread of manipulated content.
Alia Bhatt Deepfake Video: After Rashmika Mandanna, Kajol and Katrina Kaif, Alia Bhatt falls prey to DEEPFAKE; actress' morphed video goes viral on social media | Etimes - Times of India Videos

2023-11-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos that have been disseminated widely, causing harm to the celebrities' reputations and privacy, which constitutes a violation of rights. The harm is realized as the videos have gone viral and affected the individuals involved. The article also highlights societal concern about the impact of such AI-generated content, reinforcing the harm caused. Hence, the event meets the criteria for an AI Incident due to direct harm caused by AI-generated content.
Rashmika Mandanna on the impact of deepfake videos: 'We've normalised them'

2023-11-27
MoneyControl
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems and have directly led to harm in the form of digital deception and violation of personal rights of the actors involved. The article discusses a specific incident involving Rashmika Mandanna and others, indicating realized harm. The involvement of AI in creating deepfakes and the resulting harm to individuals' rights and reputations qualifies this as an AI Incident under the framework, specifically under violations of human rights and harm to communities. The mention of legal obligations and social support further confirms the incident's recognition and impact.
Alia Bhatt's Deepfake Goes Viral After Katrina Kaif, Rashmika Mandanna

2023-11-27
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The harm includes violation of personal rights and reputational damage to public figures, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the deepfakes are actively circulating and causing harm, this constitutes an AI Incident. The article also discusses responses to the incident, but the primary focus is on the realized harm from the AI-generated deepfakes.
Rashmika Mandanna On Deepfakes: "We've Normalised Them, It Isn't Ok"

2023-11-28
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and dissemination of deepfake videos using AI technology, which have directly harmed the individuals by misrepresenting them and causing emotional distress. The AI system's use in generating these videos is central to the harm described. The event involves the use and misuse of AI systems leading to violations of personal rights and harm to individuals, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard or Complementary Information.
Alia Bhatt falls prey to deepfake video after Rashmika Mandanna and Kajol

2023-11-27
Hindustan Times
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual content to create realistic but fake videos. The article explicitly mentions the use of deepfake AI technology to create videos that falsely depict actors, causing emotional harm and identity theft. The harm is realized as the videos are circulating widely on social media, causing distress to the victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (emotional and reputational harm) and violations of rights (identity theft and privacy breaches).
Alia Bhatt's DeepFake Video Goes Viral After Rashmika Mandanna, Katrina Kaif's; How Tech Gets Scary - News18

2023-11-28
News18
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated visual content. The viral spread of such videos involving Alia Bhatt and other actresses constitutes a direct harm to their reputations and privacy, which falls under violations of human rights and harm to communities. Since the AI system's use has directly led to this harm, this event qualifies as an AI Incident.
Alarming rise in Deepfake Incidents: Alia Bhatt falls victim to AI after Rashmika Mandanna, Kajol and Katrina Kaif

2023-11-27
mint
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated visual content. The described videos have been circulated on social media, causing reputational harm and distress to the individuals involved, which constitutes harm to communities and individuals. The misuse of AI to create deceptive content that damages personal reputation and causes emotional harm fits the definition of an AI Incident, as the AI system's use has directly led to harm. The article also references public and legal responses, but the primary focus is on the realized harm from the AI-generated deepfakes.
Alia Bhatt Falls Prey To Deepfake After Rashmika Mandanna And Katrina Kaif, SHOCKING Video Goes Viral - News18

2023-11-27
News18
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The misuse of such AI-generated deepfakes to depict individuals in compromising or misleading scenarios constitutes a violation of rights and can cause harm to the individuals and communities involved. Since the videos are already viral and causing harm, this qualifies as an AI Incident under the definition of violations of human rights and harm to communities. The article explicitly mentions the harmful use of AI and the viral spread of these videos, confirming realized harm rather than just potential harm.
Amid Deep Fake Controversy, Alia Bhatt Urges Women To Download NCW's New App For Protection; 'Much Needed'

2023-11-28
Mashable India
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of deep fake videos using AI has directly led to harm in the form of violation of personal rights and reputational damage to the celebrities involved, which fits the definition of an AI Incident under violations of human rights or breach of obligations to protect fundamental rights. The article also highlights responses to this harm, but the primary focus is on the realized harm caused by AI-generated deep fakes. Therefore, this event is classified as an AI Incident.
Rashmika Mandanna Calls Deepfake Videos 'Scary' After Alia Bhatt Falls Prey: 'I Want To...' - News18

2023-11-28
News18
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated realistic videos by swapping faces or altering content. The viral spread of such videos constitutes harm to the individuals' reputations and privacy, which falls under violations of human rights and harm to communities. Since the article describes the deepfake videos as already viral and causing distress, this is a realized harm directly linked to AI system use. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated deepfake content.
News18 Evening Digest: SFJ Fabricated Indian Envoy's 'Heckling Video', Say Sources And Other Top Stories - News18

2023-11-27
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake videos created using AI technology that have gone viral, causing harm through misinformation and misuse of AI. The AI system's use in generating these videos directly leads to harm to the individuals depicted and potentially to the broader community through misinformation and reputational damage. This fits the definition of an AI Incident as the harm is realized and the AI system's role is pivotal in causing it.
Rashmika Mandanna Talks About Deepfake Videos After Alia Bhatt Falls Prey: 'This Is Not Normal' - News18

2023-11-27
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems capable of face swapping and video manipulation. The harm is realized as the videos have gone viral, affecting the individuals' reputations and causing distress. This fits the definition of an AI Incident because the AI system's use has directly led to violations of personal rights and harm to individuals and communities. The article also mentions ongoing legal and social responses, but the primary focus is on the harm caused by the AI-generated deepfakes.
Alia Bhatt Falls Prey to DeepFake, Obscene Video After Rashmika Mandanna, Katrina Kaif And Kajol - Video Goes Viral

2023-11-27
India News, Breaking News, Entertainment News | India.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos and images by superimposing faces or altering content. The event describes the creation and viral spread of such deepfakes, which are used maliciously to harm the depicted individuals. This constitutes a violation of rights and harm to individuals, fulfilling the criteria for an AI Incident. The harm is realized as the videos have gone viral and caused distress to the victims, not merely a potential risk.
Alia Bhatt Becomes Latest Target Of DeepFake After Rashmika Mandanna, Katrina Kaif, Obscene Clip Surfaces On Internet

2023-11-26
Zee News
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based deepfake technology to create fake videos and images of celebrities, which are then disseminated online. This constitutes a direct misuse of AI systems leading to harm in the form of violation of personal rights and reputational damage, fitting the definition of an AI Incident. The harm is realized as the videos are circulating and causing concern among the victims and the public. Therefore, this is classified as an AI Incident.
Alia Bhatt's deepfake video goes viral after Rashmika

2023-11-27
India Today
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of deepfake videos using AI systems that manipulate facial images to produce misleading and harmful content. This misuse of AI technology has directly led to violations of personal rights and reputational harm to the individuals depicted, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized and ongoing as the videos are viral and causing concern.
Alia Bhatt Joins the List of Deepfake Victims, Here's How Government Plans to Tackle the Situation

2023-11-29
Mashable India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos that have directly led to harm by violating individuals' rights and causing reputational and emotional damage. The article details actual incidents of harm (deepfake videos of celebrities) and the government's planned regulatory and technological responses. Since harm has already occurred due to the AI system's misuse, this qualifies as an AI Incident. The government's plans and strategies to combat the issue are complementary information but the primary focus is on the realized harm from deepfake misuse.
Alia Bhatt Is The Latest Victim Of Obscene Deepfake Video After Rashmika Mandanna, Kajol; The Internet Is Livid

2023-11-27
Mashable India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which are AI-generated synthetic media. The harm is realized as the videos are obscene, deceptive, and have gone viral, causing reputational and privacy harm to the victims. This constitutes a violation of rights and harm to individuals and communities. Therefore, this qualifies as an AI Incident. The mention of government action and penalties is complementary information but does not change the classification of the event as an incident.
Alia Bhatt falls prey to deepfake following Katrina Kaif and Rashmika Mandanna

2023-11-27
Mashable ME
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual content to create realistic but fake videos. The creation and spread of such deepfakes constitute misuse of AI technology causing harm to the reputation and privacy of the celebrities involved, which can be considered harm to communities and violations of rights. Since the harm is occurring (videos have emerged online), this qualifies as an AI Incident. The article also discusses government measures, but the primary focus is on the realized harm from AI misuse.
After Rashmika Mandanna, Alia Bhatt Becomes Victim Of Obscene Deepfake Video

2023-11-27
english
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create and disseminate obscene videos falsely showing Alia Bhatt, which is a direct misuse of AI leading to harm. The harm includes violation of personal rights, reputational damage, and potential psychological harm to the individual, as well as societal harm through misinformation and exploitation. The involvement of AI in generating the deepfake content is clear, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident.
Fact Check: Viral Clip Of Actor Alia Bhatt Posing In A Co-Ord Set Is A Deepfake

2023-11-28
english
Why's our monitor labelling this an incident or hazard?
The video is described as a deepfake, which is an AI-generated manipulated video. The event concerns the use of AI to create misleading content that falsely depicts a person, which can cause harm such as misinformation or reputational damage. Since the video is already viral and spreading misinformation, this constitutes an AI Incident due to harm to communities through misinformation and potential violation of rights (e.g., privacy, reputation).
Rashmika Mandanna Reacts After Alia Bhatt's Viral Deepfake Video: 'We've Normalised Them But It Isn't Okay'

2023-11-28
english
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that have been widely disseminated, causing harm to the individuals depicted. This constitutes a violation of personal rights and can be considered harm to communities and individuals. The AI system's use has directly led to this harm, fulfilling the criteria for an AI Incident. The discussion of normalization and calls to speak up further emphasize the impact of these AI-generated harms.
After Rashmika Mandanna, Kajol and Katrina Kaif, Alia Bhatt's DEEPFAKE video goes viral

2023-11-27
TimesNow
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake videos of individuals without their consent. The viral spread of such videos can cause significant harm to the individuals depicted, including reputational damage and violation of privacy rights. Since the event reports that these deepfake videos are actively circulating and impacting multiple celebrities, the harm is realized and ongoing. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons and communities.
Alia Bhatt, the new victim of deepfake - Telugu News - IndiaGlitz.com

2023-11-28
IndiaGlitz.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which are manipulated media that can cause harm to individuals' reputations and privacy, constituting violations of rights. The harm is realized as the videos have circulated widely and caused public concern. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to violations of personal rights and harm to the individuals targeted by the deepfakes.
Alia Bhatt, Katrina Kaif, To Rashmika Mandanna: Celebs Who Fell Victims To Deepfake Videos And Photos

2023-11-27
Jagran English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and machine learning to create deepfake videos and images that have been distributed online, harming the celebrities' privacy and causing emotional distress. This constitutes a violation of rights and harm to individuals and communities through misinformation and identity theft. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The mention of societal concern and calls for education further support the recognition of realized harm.
Alia Bhatt Falls Prey To DeepFake After Rashmika Mandanna, Katrina Kaif; Obscene Video Surfaces Online

2023-11-27
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that manipulate the facial features of celebrities onto other bodies, resulting in harmful and misleading content. The harm is realized as the affected individuals experience emotional distress and reputational damage, which are violations of their rights and dignity. The misuse of AI technology here directly leads to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Rashmika Mandanna REACTS To 'Scary' Deepfake Videos After Alia Bhatt's Obscene Photos Surfaces Online

2023-11-28
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated media generated by AI. The harm includes emotional distress to the victims and potential violation of their rights, such as privacy and reputation, which falls under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the videos have circulated online and caused distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.
Alia Bhatt becomes latest victim of deepfake

2023-11-28
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create manipulated videos and images of public figures, causing harm to their privacy and reputation. The harm is realized and ongoing, as the manipulated content is circulating widely and affecting the individuals involved. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to persons and communities through misinformation and digital deception.
Alia Bhatt becomes latest victim of deepfake days after Kajol's and Rashmika Mandanna's viral videos

2023-11-27
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) used maliciously to create and spread manipulated videos that harm individuals' rights and reputations, constituting a violation of human rights and personal dignity. Since the harm is realized and ongoing, this qualifies as an AI Incident. The mention of government action is complementary but does not change the primary classification.
Rashmika Mandanna talks about deepfake hours after Alia Bhatt falls victim; says "I felt afraid..."

2023-11-27
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which have directly led to harm in the form of fear, distress, and violation of personal rights of the celebrities involved. The deepfake videos are maliciously created and circulated, causing reputational and emotional harm. The involvement of legal actions and public support further confirms the recognition of harm. Hence, this is an AI Incident as the AI system's use has directly led to harm to persons (violation of rights and emotional harm).
SHOCKING! Alia Bhatt becomes latest victim of deepfake, obscene video surfaces online

2023-11-27
India TV News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos that have been used to create manipulated obscene content of a person without consent. This constitutes a violation of rights and harm to the individual and community by spreading harmful misinformation and defamation. The harm is realized as the videos are circulating online and causing distress. The government's response further confirms the recognition of harm and the need for mitigation. Hence, it meets the criteria for an AI Incident as the AI system's malicious use has directly led to harm.
Alia Bhatt falls victim to vulgar deepfake video after Rashmika Mandanna and Kajol, fans slam AI's misuse

2023-11-27
Daily News and Analysis (DNA) India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated video content that directly harms the reputation and dignity of a person (Alia Bhatt). This constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The harm is realized as the video is circulating and causing distress, not merely a potential risk. Therefore, this is classified as an AI Incident due to the direct misuse of AI causing harm to a person.
Bollywood actress Alia Bhatt latest victim of Deepfake video

2023-11-27
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The use of Deepfake technology to create and share manipulated videos that falsely depict individuals in obscene or compromising situations constitutes a violation of their rights, including privacy and potentially other fundamental rights. The AI system's use here directly leads to harm by damaging reputations and causing emotional distress to the victims. Therefore, this event qualifies as an AI Incident due to violations of human rights and harm to individuals caused by the AI-generated content.
Rashmika Mandanna on the increase in Deepfake videos: 'Extremely scary not only for me but...'

2023-11-29
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos, which are a form of AI-generated manipulated content. The harm is realized as the videos have been circulated, causing emotional distress and reputational damage to the individuals involved. This constitutes harm to communities and individuals, including violations of rights related to identity and privacy. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm.
Alia Bhatt falls victim to Deepfake after Rashmika Mandanna, Katrina Kaif; obscene video goes viral

2023-11-27
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI deepfake technology to create manipulated videos that have been disseminated online, causing harm to the actresses involved through false and obscene portrayals. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential, as the videos are already viral and causing concern. Therefore, this is classified as an AI Incident.
Alia Bhatt becomes victim of deepfake video after Rashmika Mandanna

2023-11-27
The News International
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of AI-generated deepfake videos depicting actresses in indecent scenarios. The use of AI to fabricate such videos directly leads to harm by damaging reputations, spreading misinformation, and violating the individuals' rights to privacy and dignity. This constitutes a violation of human rights and harm to communities through misinformation and identity misuse, fitting the definition of an AI Incident.
After Rashmika Mandanna & Kajol, Alia Bhatt Becomes a Target of Deepfake Video

2023-11-27
TheQuint
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The viral spread of such videos targeting individuals constitutes harm to their reputation and privacy, which falls under harm to communities and individuals. The article describes actual occurrences of deepfake videos being circulated, not just potential risks, thus qualifying as an AI Incident due to realized harm caused by AI misuse.

After Rashmika Mandanna, Alia Bhatt falls victim to deepfake video

2023-11-27
ARY NEWS
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The viral spread of such videos of celebrities like Alia Bhatt constitutes a violation of their rights and causes harm to their reputation and emotional well-being. Since the AI system's use has directly led to harm (emotional and reputational), this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights.

Rashmika Mandanna speaks up on rise in deepfake cases

2023-11-29
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The harm is realized as these videos have been circulated, causing reputational and personal harm to the victims, which falls under violations of human rights and harm to communities. Therefore, this is an AI Incident due to the direct harm caused by the AI-generated deepfake videos.

Deepfake: Alia Bhatt becomes next victim, obscene video goes viral

2023-11-27
mid-day
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create harmful fabricated videos that have been widely disseminated, causing harm to the individuals targeted. This fits the definition of an AI Incident because the AI system's use has directly led to violations of personal rights and harm to the affected persons. The harm is realized and ongoing, as the videos have gone viral and the victims have publicly addressed the issue.

Alia Bhatt falls prey to 'deepfake video' scandal

2023-11-27
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is explicit as deepfake technology uses AI and machine learning to generate manipulated videos. The event involves the use of AI (use phase) to create a fake video that can harm the reputation and privacy of the individual depicted, which is a violation of personal rights and can be considered harm to communities or individuals. Since the manipulated video is already circulating, the harm is realized, making this an AI Incident rather than a hazard or complementary information.

Deepfake Video Alia Bhatt Doing Obscene Gestures Surfaces Online After Rashmika Mandanna's Morphed Clip Went Viral

2023-11-27
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically deepfake technology, which is used to create manipulated videos that misrepresent real people. The harm caused includes violations of personal rights and reputational damage, which fall under violations of human rights or breaches of obligations intended to protect fundamental rights. Since the harm is realized and ongoing due to the circulation of these videos, this qualifies as an AI Incident. The involvement of police and public concern further supports the classification as an incident rather than a hazard or complementary information.

After Rashmika Mandanna, Alia Bhatt Becomes Victim of Deepfake Scandal in Disturbing Viral Video

2023-11-27
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated content that harms an individual's rights and dignity. The deepfake video directly leads to harm by violating privacy and potentially causing reputational damage, which falls under violations of human rights or breach of obligations protecting fundamental rights. Therefore, this qualifies as an AI Incident.

Alia Bhatt Becomes The Latest Deepfake Victim After Kajol, Katrina Kaif & Rashmika Mandanna, Actress' Morphed Video Sparks Concern Over Misuse Of AI!

2023-11-27
Koimoi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that have been widely circulated, causing harm to the individuals depicted and raising societal concerns. The harm is realized, not just potential, as the videos are already viral and objectionable. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also calls for regulation and ethical measures, but the primary focus is on the incident of harm caused by the deepfake videos.

Alia Bhatt is the latest target of a deepfake video after Rashmika Mandanna and Katrina Kaif

2023-11-27
WION
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake content by superimposing faces onto other bodies. The event reports that these videos have gone viral, causing distress and harm to the celebrities involved, which is a direct harm to their personal rights and reputations. The involvement of AI in generating these videos and the resulting harm to individuals' rights and privacy meets the criteria for an AI Incident. The event is not merely a potential risk or a general discussion but documents actual harm occurring due to AI misuse.

Privacy perils: Alia Bhatt becomes latest victim of deepfake tech

2023-11-27
ARAB TIMES - KUWAIT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and images that have been used to create inappropriate and manipulated content of celebrities. The use of AI in creating these deepfakes directly leads to harm in terms of privacy violations and ethical concerns. Since the harm is realized and ongoing (videos circulating on social media), this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations protecting fundamental rights (privacy).

Kajol's Dress Changing Video To Rashmika's Morphed Images To Alia Bhatt's Obscene Video: DeepFake Technology Decoded

2023-11-27
International Business Times, India Edition
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deep learning-based deepfake technology) to create altered videos that have led to harm, including violations of personal rights and reputational damage to individuals. The creation and distribution of such deepfake videos directly cause harm to the affected celebrities, fitting the definition of an AI Incident. Additionally, the article discusses regulatory responses, but the primary focus is on the realized harm caused by the misuse of AI deepfake technology.

Alia Bhatt falls victim to Deepfake technology, video goes viral

2023-11-27
KalingaTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used maliciously to create manipulated video content that harms the individual depicted (Alia Bhatt) by misrepresenting her and potentially damaging her reputation. The harm is realized as the video has gone viral, causing reputational and privacy harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (violation of rights and reputational harm).

Deepfake Vs. Real Videos: 5 Tips To Distinguish AI-Generated Content

2023-11-29
Science Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate deepfake videos, which can cause harm by spreading misinformation and undermining trust. However, it does not describe a particular event where such harm has occurred or a specific incident involving AI malfunction or misuse leading to harm. It also does not present a direct warning of imminent harm but rather general advice and concerns. Therefore, it is best classified as Complementary Information, as it provides context and awareness about AI deepfakes and their societal implications without reporting a concrete AI Incident or AI Hazard.

After Katrina Kaif and Rashmika Mandanna, Alia Bhatt becomes the new target for Deepfake video

2023-11-27
Bollywood Bubble
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake AI technology to create harmful and non-consensual content targeting a person, which constitutes a violation of rights and harm to the individual and community. The deepfake videos are actively spreading, causing realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Deepfake Video of Alia Bhatt Spreading on Social Media

2023-11-28
The PrimeTime
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deep learning-based deepfake generation) used to create manipulated videos that are being widely disseminated, causing harm to individuals' reputations and potentially misleading the public. This constitutes harm to communities and individuals through misinformation and violation of personal rights. Since the harm is occurring (videos are spreading and causing concern), this qualifies as an AI Incident. The article also discusses responses and potential regulation, but the primary focus is on the realized harm from the deepfake videos.

Alia Bhatt Falls Prey to Deepfake Videos: The Concerning Case Of Advanced Technology - Woman's era

2023-11-28
womansera.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual content to create realistic but fake videos. The article details actual harm occurring to celebrities through the spread of these videos, including privacy violations and reputational damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm to individuals' rights and communities. The involvement of AI in the creation and dissemination of these videos is explicit and central to the harm described.

Alia Bhatt becomes the most recent target of a deepfake video

2023-11-28
Global Village Space
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic but fabricated videos by manipulating images. The creation and dissemination of a deepfake video of Alia Bhatt, which portrays inappropriate gestures falsely attributed to her, directly harms her reputation and emotional well-being. This misuse of AI technology constitutes a violation of rights and harm to the individual, fitting the definition of an AI Incident. The article describes the harm as occurring, not just potential, and references similar prior incidents and public concern, reinforcing the classification as an AI Incident.

Alia Bhatt falls prey to deepfake

2023-11-27
GG2
Why's our monitor labelling this an incident or hazard?
The deepfake videos are created using AI systems that generate manipulated visual content by replacing faces, which is a clear use of AI technology. The videos have gone viral, causing harm to the individuals depicted by misrepresenting them in obscene or compromising scenarios, which is a violation of their rights and harms their reputation. This harm is realized and ongoing, not merely potential. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

Alia Bhatt now trapped in the deepfake controversy

2023-11-27
PagalParrot
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake representations of individuals. The creation and dissemination of such videos can cause harm to the reputation and privacy of the people involved, constituting a violation of their rights. The article describes actual deepfake videos circulating online, indicating that harm is occurring or has occurred. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

Deepfake in Bollywood: Alia Bhatt is the latest victim

2023-11-30
The Frontier Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake content, which is a form of AI-generated manipulated media. The deepfake videos and altered images have been released and gone viral, causing realized harm to the celebrities' privacy and potentially their reputations. This constitutes a violation of rights and harm to communities, meeting the criteria for an AI Incident. The harm is direct and materialized, not merely potential or speculative.

Deepfake: The cutest Bollywood Actress Alia Bhatt is the new Victim of the Latest AI Technology

2023-11-29
technosports.co.in
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated videos that harm individuals' reputations, which constitutes a violation of rights and harm to communities. The harm is realized as the videos are already circulating and affecting the victims. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Deepfake video row: Alia Bhatt's deepfake ordeal sparks alarm over AI misuse following obscene video circulation

2023-11-27
PTC News
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI systems (deepfake technology) to create synthetic media that falsely portrays individuals in inappropriate contexts. This misuse of AI has directly led to violations of privacy and personal rights, which falls under harm to individuals and communities. The circulation of such content causes reputational and psychological harm, meeting the criteria for an AI Incident. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's malicious use.

Alia Bhatt deepfake video: Brahmastra actress latest victim of AI-generated misinformation

2023-11-27
NEWS9LIVE
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos that have been widely disseminated, causing harm to the individuals depicted and potentially to the public by spreading false information. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals through misinformation and reputational damage. The article also references ongoing legal and governance responses, but the primary focus is on the realized harm from the AI-generated deepfakes.

Deepfakes: Alia Bhatt Video Sparks Social Media Uproar

2023-11-27
Sputnik India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos and images that have been maliciously used to harm individuals by spreading false and obscene content. This constitutes a violation of personal rights and causes harm to the individuals and communities involved. Since the harm is occurring and the AI system's role is pivotal in generating the harmful content, this qualifies as an AI Incident under the framework.

After Alia Bhatt deepfake video goes viral, know these top 10 crucial safety tips

2023-11-27
HT Tech
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as deepfake videos are generated using AI techniques. The article describes an actual deepfake video circulating online, which is a misuse of AI that can cause harm such as reputational damage and misinformation. However, the article does not detail a specific new AI Incident with direct or indirect harm beyond the general mention of the viral video and similar past cases. Instead, it focuses on educating the public with safety tips to mitigate harm. This aligns with the definition of Complementary Information, as it provides context and guidance related to AI harms without reporting a new primary AI Incident or AI Hazard.

After Rashmika Mandanna, Katrina Kaif, Kajol, Alia Bhatt falls victim to deepfake, morphed video goes viral

2023-11-27
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a morphed video of a celebrity, which is then circulated online causing reputational harm and distress. This misuse of AI leads to a violation of personal rights and harms the community by spreading false and harmful content. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework.

Ranbir Kapoor reveals why Sandeep Reddy Vanga chose the title 'Animal' for his upcoming film

2023-11-27
Webdunia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (deepfake generation) and highlights the misuse of AI to create misleading videos, which can cause harm to individuals' reputations and potentially to communities by spreading misinformation. However, the article does not report a specific incident of harm occurring but rather the ongoing misuse and public concern, indicating a plausible risk of harm.

Could a fake video of you be circulating too? How to stay safe from deepfakes, with all the essential tips

2023-11-29
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
Deepfake technology involves AI systems that generate realistic fake videos or images. The article mentions real incidents where celebrities have been affected by deepfakes, indicating actual harm to individuals' reputations and privacy. Since the AI system's use has directly led to harm (reputational and possibly psychological harm to the celebrities), this qualifies as an AI Incident. The article's focus on safety tips is complementary but the core issue is the realized harm caused by deepfake AI content.

Deepfake video: After these actresses, Alia Bhatt now falls victim to a deepfake video

2023-11-27
SamacharJagat
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI-based techniques that synthesize realistic but fake visual content. The misuse of such AI systems to create and spread misleading videos constitutes a violation of individuals' rights and can cause harm to communities by spreading misinformation and damaging reputations. Since the article describes actual circulation of these deepfake videos and the harm they cause, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

After Rashmika and Kajol, Alia Bhatt's deepfake video goes viral; the clip was made over footage of a girl making obscene gestures

2023-11-27
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically deepfake technology, which is an AI method for generating synthetic media by swapping faces in videos. The use of such AI-generated content to create and distribute sexually explicit or misleading videos of real persons without consent directly leads to violations of human rights, including privacy and dignity. The article reports that these videos are already viral, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's misuse has directly led to harm to individuals' rights and reputations.

Remove deepfakes from social media within 24 hours... Indians share their views in a survey

2023-11-30
Navbharat Times
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that manipulates images and voices to create realistic but fake videos. The article describes the harm caused by these AI-generated deepfakes on social media, including misinformation and reputational damage to individuals, which constitutes harm to communities and potentially violations of rights. The presence of these deepfakes and their impact is ongoing and realized, not merely potential. Therefore, this qualifies as an AI Incident. The article also mentions government responses, but the main focus is on the harm caused by the AI system's use, not just the response, so it is not Complementary Information.

How will the government crack down on deepfakes? Full details here

2023-11-27
hindi.moneycontrol.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is clear as DeepFake videos are generated using AI techniques. The article highlights the potential harm of such content (e.g., misinformation, reputational damage) and the government's proactive measures to mitigate these risks. Since no specific harm or incident is described as having occurred, but the risk of harm is recognized and addressed, this qualifies as Complementary Information. It provides context on societal and governance responses to AI-related challenges without reporting a concrete AI Incident or AI Hazard event.

Alia Bhatt Deepfake: Alia becomes the latest victim as a video of a girl sitting on a bed goes viral

2023-11-27
News18 India
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deep learning-based deepfake generation) to create manipulated video content that has been widely disseminated, causing harm to the individual depicted (Alia Bhatt) and potentially violating her rights. The harm is realized as the video is viral and causing distress. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and communities through misinformation and violation of rights. The article also references similar past incidents and legal responses, reinforcing the harm caused by such AI misuse.

Rashmika Mandanna reacts after Alia Bhatt falls victim to a deepfake, calls it 'scary'; the Animal actress also makes an appeal to girls

2023-11-28
hindi
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The article reports that these videos have been circulated widely, directly causing harm to the actresses involved by violating their rights and potentially damaging their reputations. The involvement of AI in creating these videos is explicit and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident as it involves the use of AI systems leading to violations of rights and harm to individuals and communities.

Alia Bhatt Deepfake Video: After Rashmika and Kajol, Alia Bhatt now falls victim to a deepfake video that has gone viral

2023-11-27
hindi
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The creation and dissemination of such videos can cause harm to the individuals depicted (harassment, reputational damage) and to communities (misinformation, violation of privacy and rights). Since the article reports that these deepfake videos have already been created and are spreading widely, causing distress and harm, this constitutes an AI Incident. The AI system's use (deepfake generation) has directly led to harm in terms of violation of rights and harm to communities.

Not just celebrities, you too could fall victim to deepfakes: how to protect your photos from deepfake scams

2023-11-29
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake generation technology, which uses AI to create manipulated videos. While it references actual harms to celebrities and the potential for harm to individuals, the main focus is on educating readers about the risks and preventive measures. There is no description of a new or specific AI incident causing harm, nor a new hazard event. Therefore, this is best classified as Complementary Information, as it supports understanding and awareness of AI-related risks without reporting a new incident or hazard.

Alia Bhatt Deepfake Video: After Katrina Kaif, Alia Bhatt falls victim to a deepfake as this video of the actress goes viral

2023-11-27
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create manipulated video content that harms the reputation and privacy of the individual depicted. The harm here is a violation of personal rights and potentially human rights, as it involves non-consensual use of someone's likeness in explicit content, which can cause psychological harm and damage to reputation. Since the deepfake video is already viral and causing concern, the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations intended to protect fundamental rights.

DeepFake: After Rashmika Mandanna, Katrina and Kajol, Alia Bhatt's video goes viral as top celebrities fall prey to AI

2023-11-27
Prabhat Khabar - Hindi News
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake videos, which are created using AI systems. The harm is realized as these videos are viral and affect the individuals' reputations and privacy, which falls under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Alia Bhatt Deepfake Video: Alia Bhatt also falls victim to a deepfake video as this clip of the actress goes viral on social media

2023-11-27
Nai Dunia
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake video content. The viral spread of such videos involving celebrities constitutes a direct harm to their personal rights and reputations, which falls under violations of human rights or harm to communities. Since the harm is occurring (videos are viral and victims are identified), this qualifies as an AI Incident rather than a hazard or complementary information.

Alia Bhatt Deepfake Video: After Rashmika Mandanna and Kajol, a deepfake video of Alia Bhatt; here is the full story

2023-11-27
NDTV Gadgets 360 Hindi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI tools to create deepfake videos, which are manipulated media that misrepresent individuals. The misuse of AI here has directly led to harm by misleading viewers and potentially damaging the reputations of the individuals depicted, which qualifies as harm to communities and a violation of rights. The article also references government and public concern, confirming the significance of the harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

30% of people admit that many videos seen on the internet turn out to be fake

2023-11-30
NDTV Gadgets 360 Hindi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos causing harm by misleading viewers and spreading misinformation, which affects individuals' reputations and poses a threat to democracy, thus constituting harm to communities and violation of rights. The involvement of AI in creating these videos is clear, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident. The government's regulatory response and public survey are complementary information but the main focus is on the harm caused by AI deepfakes.

Deepfake videos of several Bollywood actresses going viral: here is how to spot the difference

2023-11-27
inextlive
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses deepfake videos, which are generated using AI technology. However, it does not describe any realized harm such as injury, rights violations, or disruption caused by these videos. Instead, it focuses on educating viewers to recognize AI-generated content, which is complementary information enhancing understanding of AI's societal impact. Therefore, it does not qualify as an AI Incident or AI Hazard but fits the category of Complementary Information.

You too could fall victim to a deepfake; keep these points in mind to protect yourself

2023-11-30
inextlive
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake technology, which uses AI to generate manipulated videos. The harm described (misuse of images to create fake videos) is a recognized form of violation of rights and harm to communities. However, the article does not report a specific AI incident where harm has occurred, nor does it describe a new hazard event with plausible future harm beyond the general known risk. Instead, it provides advice on how to avoid becoming a victim, which is complementary information enhancing understanding and awareness about AI-related risks.

Alia Bhatt's deepfake video goes viral, fans voice their outrage

2023-11-27
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic fake videos by manipulating or generating visual content. The viral spread of such a deepfake video constitutes a violation of personal rights and can cause harm to the individual and communities by spreading misinformation and damaging reputations. Since the deepfake video is actively circulating and causing harm, this qualifies as an AI Incident under the category of violations of human rights and harm to communities.

Animal star Rashmika Mandanna outraged over Alia Bhatt's deepfake video, says 'this is not a normal thing...'

2023-11-28
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated realistic videos. The viral spread of such videos without consent causes harm to the individuals involved, including reputational damage and violation of privacy, which falls under violations of human rights and harm to communities. Since the event describes actual viral deepfake videos causing harm and public objection, it qualifies as an AI Incident.

Alia Bhatt Deepfake Video: Alia Bhatt's deepfake video goes viral, people outraged at the misuse of AI technology

2023-11-27
Times Network Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create and spread a manipulated video that falsely portrays a public figure engaging in inappropriate behavior. This constitutes a violation of the individual's rights and causes harm to their reputation and community trust. Since the harm is realized through the viral spread of the video, this qualifies as an AI Incident under violations of human rights and harm to communities.

The dangerous world of deepfakes... - Punjab Kesari

2023-11-28
Punjab Kesari
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (artificial intelligence and machine-learning software) used to create deepfake videos and voice clones, which are being maliciously used to spread misinformation, blackmail individuals, and potentially recruit terrorists. Although it does not describe a single specific incident causing direct harm, it details ongoing harms such as cybercrime, blackmail, and the dissemination of misinformation linked to AI-generated deepfakes. Given that these harms are occurring and the AI system's role is pivotal, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to violations of rights and harm to communities.

It's all a sham, everything is a sham... can the government rein in deepfakes?

2023-11-29
NavBharat Times Blog
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos that have already caused harm to individuals by unauthorized use of their images and videos, leading to reputational damage and social harm. The use of AI in creating these deepfakes is central to the issue. The harms are direct and ongoing, including violation of rights and potential social disruption. Legal actions and government responses are mentioned but do not negate the fact that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

After Katrina Kaif and Rashmika Mandanna, Alia Bhatt falls victim to a deepfake video

2023-11-27
NDTVIndia
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic manipulated footage. The misuse of such AI-generated content to damage a person's reputation and privacy is a direct harm to the individual and community, fitting the definition of an AI Incident. The article describes the harm as occurring (a viral deepfake video causing reputational damage), and the involvement of AI is explicit. Therefore, this event qualifies as an AI Incident.

Alia Bhatt Falls Prey To Deepfake Video: After Rashmika Mandanna and Katrina Kaif, Alia Bhatt falls victim to a deepfake video; here is how you can tell real videos from fake ones | 🎥 LatestLY हिन्दी

2023-11-28
LatestLY हिन्दी
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created by AI systems that manipulate facial images and videos. The harm is realized as the videos damage the reputation and emotional well-being of the individuals depicted, constituting harm to persons and communities. The article reports on actual harm caused by the use of AI systems (deepfake technology) and the viral spread of such content, not just potential or future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

After seeing her deepfake video, Alia Bhatt takes a big step

2023-11-28
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The deepfake video is generated using AI techniques that manipulate visual content to create false and harmful representations. The video has been widely circulated, causing reputational and emotional harm to Alia Bhatt, which is a violation of rights and harm to the individual. The AI system's misuse directly led to this harm. Therefore, this event meets the criteria of an AI Incident due to realized harm caused by AI-generated content.

Alia Bhatt's deepfake: Now Alia Bhatt falls victim to a deepfake; fans erupt in anger over the explicit video

2023-11-28
Good News Today
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake visual content. The article explicitly mentions deepfake videos of Alia Bhatt and others being circulated, harming their reputation and personal rights. This constitutes a violation of human rights and of applicable laws protecting individuals from such misuse. The harm is direct and ongoing, as the videos are viral and causing distress. Hence, this is an AI Incident due to the direct harm caused by the AI-generated content.

Alia Bhatt Deepfake Video: After Rashmika, now Alia Bhatt falls victim to a deepfake video; the viral clip will shock you - Punjab Kesari

2023-11-27
Punjab Kesari
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake videos, which are generated by AI systems that synthesize and manipulate images and videos. The harm is realized, as the videos are viral and cause reputational and personal harm to the individuals depicted. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons (reputational and privacy harm). The article describes the harm as occurring, not merely potential, so it is neither a hazard nor complementary information. Therefore, the classification is AI Incident.