Deepfake AI Used to Create Explicit Fake Images of Bollywood Actresses, Causing Public Outcry

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfake technology was used to create and circulate explicit fake images and videos of Bollywood actresses Katrina Kaif and Rashmika Mandanna, leading to privacy violations, reputational harm, and widespread public concern. The incidents highlight the growing misuse of AI for malicious purposes and have prompted calls for regulatory action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a deepfake image of Katrina Kaif, created using AI-based deepfake technology, which is a clear example of AI misuse. The harm here is reputational and privacy-related, which falls under harm to individuals or communities. Since the deepfake image has already gone viral, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and spreading the altered image.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Robustness & digital security, Accountability, Transparency & explainability

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Women

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Now Katrina Kaif is latest victim of deepfake tech, towel-clad pic from 'Tiger 3' goes viral

2023-11-07
Economic Times
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake image of Katrina Kaif, created using AI-based deepfake technology, which is a clear example of AI misuse. The harm here is reputational and privacy-related, which falls under harm to individuals or communities. Since the deepfake image has already gone viral, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and spreading the altered image.

After Rashmika Mandanna's DEEPFAKE video goes viral, Sonnalli Seygall recalls her similar 'scary' experience; says 'My mom brought it to my notice...' | Etimes - Times of India Videos

2023-11-09
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake videos and images that impersonate real people without consent. This misuse has directly caused harm to the celebrities by violating their rights and causing emotional distress, which fits the definition of an AI Incident under violations of human rights and harm to individuals. The viral spread of such content and the distress caused confirm realized harm rather than just potential harm.

Katrina Kaif Becomes Latest Victim Of Deepfake As Her Morphed 'Towel' Picture Goes Viral On Internet

2023-11-07
TimesNow
Why's our monitor labelling this an incident or hazard?
The use of deepfake technology to create and disseminate manipulated videos constitutes a violation of personal rights and privacy, which falls under violations of human rights or breach of applicable laws protecting individual rights. Since the AI system's use has directly led to harm in the form of reputational damage and privacy violation, this qualifies as an AI Incident.

After Rashmika Mandanna, Katrina Kaif falls victim to Deepfake; morphed picture of the actress goes viral | Etimes - Times of India Videos

2023-11-07
The Times of India
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake generation) used maliciously to create and disseminate altered images causing reputational and personal harm to the actress, which qualifies as a violation of rights and harm to communities. The harm has already occurred as the image went viral and caused distress. Therefore, this is an AI Incident due to the direct harm caused by the AI-generated deepfake content.

After Rashmika Mandanna, Katrina Kaif's Deepfake photo from 'Tiger 3' goes viral; fans call it 'very shameful act' | Hindi Movie News - Times of India

2023-11-07
The Times of India
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of deepfake images using AI tools constitute a misuse of AI technology that directly harms individuals by spreading false and manipulated content. This falls under violations of rights and harm to communities due to misinformation and reputational damage. Since the harm is realized and the AI system's role is pivotal, this qualifies as an AI Incident.

After Rashmika Mandanna Video, Katrina Kaif's Deepfake Pic From 'Tiger 3' Surfaces

2023-11-07
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes the creation and circulation of AI-generated deepfake images and videos that misrepresent individuals, which constitutes a violation of rights and causes harm to communities by spreading misinformation and potentially damaging reputations. The AI system's use in generating these deepfakes is central to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm (violation of rights and harm to communities).

After Rashmika Mandanna, Katrina Kaif Falls Victim To Deepfake

2023-11-07
NDTV
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic but fake images or videos. The article describes the creation and circulation of deepfake content involving celebrities, which is a direct use of AI systems. The Union Minister's statement highlights the potential harm of such misinformation. Since the deepfake content has already surfaced and caused concern, this represents realized harm to individuals' reputations and potentially to communities through misinformation dissemination. Therefore, this qualifies as an AI Incident due to harm to communities and individuals through misinformation and reputational damage.

After Rashmika Mandanna Video, Katrina Kaif's Deepfake Image From 'Tiger 3' Sparks Internet Outrage

2023-11-07
Mashable India
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated images or videos. The creation and dissemination of a deepfake image of Katrina Kaif with inappropriate modifications directly harms the individual's rights and dignity, constituting a violation of human rights and harm to communities. The event describes realized harm through the viral spread of manipulated content and public outrage, fulfilling the criteria for an AI Incident. The government's regulatory reminder further supports the recognition of harm caused by AI misuse.

After Rashmika, Katrina Kaif Becomes Latest Victim Of Deepfake Tech. Tiger 3 Towel Fight Scene Goes Viral

2023-11-07
TimesNow
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated content. The viral spread of a deepfake video of a public figure without consent is a direct misuse of AI leading to harm in terms of violation of rights and reputational damage. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated manipulated content.

Katrina Kaif's Image Goes Viral After Rashmika Mandanna, IT Ministry Issues Notice To Platforms

2023-11-08
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake images that have been disseminated widely, causing reputational and personal harm to the individual depicted. This constitutes a violation of rights and harm to the community (in this case, the individual and potentially societal trust). Since the harm has already occurred due to the viral spread of the deepfake, this qualifies as an AI Incident. The involvement of authorities and experts discussing legal frameworks further supports the significance of the harm caused by AI misuse.

Katrina Kaif Viral Photo: Morphed Images Of Tiger 3 Actress Float Online After Rashmika Mandanna's Deepfake Video; AI Regulations In Focus

2023-11-07
Jagran English
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Deepfake AI tools) to manipulate images and videos, leading to harm in the form of identity theft and reputational damage to the actresses. This constitutes a violation of personal rights and can be considered harm to individuals. Since the harm has already occurred through the viral spread of these deepfake images and videos, this qualifies as an AI Incident. The article also discusses regulatory responses, but the primary focus is on the realized harm caused by the AI misuse.

After Rashmika Mandanna, Katrina Kaif's deepfake goes viral

2023-11-08
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to create deepfake images and videos that have been widely circulated, causing harm to the individuals depicted and raising concerns about misinformation and ethical misuse of AI. The harms include reputational damage, violation of personal rights, and social disruption due to misinformation. The involvement of AI in the creation and dissemination of these manipulated media is direct and central to the incident. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to individuals and communities.

Deepfake: After Rashmika,Katrina Kaif falls victim to nasty AI technology by morphing her infamous towel scene

2023-11-07
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create altered videos and images of celebrities, which are being widely circulated. This use of AI has directly led to harm in the form of reputational damage and violation of personal rights. The harm is realized and ongoing, not just potential. Hence, it meets the criteria for an AI Incident under violations of human rights and harm to communities through misleading AI-generated content.

After Rashmika, Katrina Kaif falls victim to Deepfake, her towel scene from Tiger 3 gets MORPHED

2023-11-07
India TV News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated videos that can deceive viewers and potentially harm the reputations and rights of the individuals depicted. This misuse of AI has directly led to violations of personal rights and could cause harm to the individuals and communities involved. Therefore, it qualifies as an AI Incident due to the realized harm from the malicious use of AI-generated deepfake content.

Katrina Kaif falls victim to deepfake after Rashmika Mandanna, morphed pic of towel fight scene from Tiger 3 goes viral

2023-11-07
Daily News and Analysis (DNA) India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system, to create morphed obscene images of actresses, which have been widely shared online. This use of AI has directly caused harm by violating the actresses' rights and causing reputational damage. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident under violations of human rights and harm to communities.

After Rashmika, Bollywood actress Katrina Kaif's hammam scene from 'Tiger 3' morphed

2023-11-07
GULF NEWS
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of deepfake images that alter the appearance of actresses in explicit ways. Deepfake technology is an AI system that generates realistic but fake images or videos. The use of such AI-generated content to produce explicit and non-consensual images directly harms the individuals involved by violating their rights and potentially causing reputational and emotional damage. Therefore, this qualifies as an AI Incident due to the realized harm to individuals' rights and dignity through the misuse of AI technology.

After Rashmika Mandanna, Katrina Kaif's deepfake image goes viral on social media

2023-11-08
Telangana Today
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves a deepfake image, which is an AI-generated manipulated content. The spread of such images can cause harm by violating the rights of the person depicted and misleading the public. Since the harm is occurring through the use of an AI system (deepfake generation), and the harm is realized (viral spread and public concern), this qualifies as an AI Incident under the category of violations of human rights or harm to communities.

After Rashmika Mandanna, deepfake images of Katrina Kaif, Sara Tendulkar hit internet

2023-11-09
OpIndia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely deepfake technology that uses machine learning and artificial intelligence to create manipulated images and videos. The harms described include reputational damage to individuals, misinformation, and potential identity theft, which fall under violations of rights and harm to communities. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident rather than a hazard or complementary information. The government's response is mentioned but is secondary to the primary harm caused by the AI misuse.

After Rashmika Mandanna video, Katrina Kaif's deepfake picture from 'Tiger 3' goes viral

2023-11-08
Daily Pakistan Global
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake images and videos, which have been widely disseminated and caused harm by distorting the actresses' appearances without consent. This constitutes a violation of rights and ethical norms, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as public outrage and legal concerns have arisen. The Indian government's regulatory response supports the assessment of actual harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Katrina Kaif Falls Prey To Deepfake, Morphed Towel Scene From Tiger 3 Goes Viral

2023-11-07
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of a deepfake image involving an actress, which is an AI system (deepfake technology) used maliciously to produce altered content. This constitutes a violation of personal rights and can cause reputational harm, thus fitting the definition of an AI Incident due to harm to individuals and communities through misinformation and privacy violation.

After Rashmika Mandanna, Katrina Kaif's Hammam Scene From Tiger 3 Gets Morphed Using Deepfake | 🎥 LatestLY

2023-11-07
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake AI to morph images and videos of actresses into explicit content, which is a direct misuse of AI technology leading to harm in terms of violation of personal rights and potential reputational damage. The AI system's use here is malicious and results in harm to individuals, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights.

Katrina Kaif's Steamy Towel Scene From Tiger 3 Gets Deepfaked After Rashmika Mandanna's Video, Where's The Technology Leading Us To?

2023-11-07
Koimoi
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI deepfake technology to create synthetic media that replaces faces in images and videos, leading to objectionable and non-consensual content. This misuse of AI has directly caused harm to the individuals depicted, violating their rights and causing reputational damage. The presence of AI systems is explicit (deepfake AI tools), and the harm is realized (viral objectionable content). Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

After Rashmika Mandanna, Tiger 3 actress Katrina Kaif falls victim to deepfake | Viral pic

2023-11-08
WION
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI tools to create a deepfake image, which is an AI system generating manipulated content. The harm here is a violation of personal rights and potential reputational damage to the actress, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the manipulated content is already circulating widely, the harm is realized, making this an AI Incident rather than a mere hazard or complementary information.

Deepfake crisis: The drama neither Rashmika Mandanna nor Scarlett Johansson were looking for

2023-11-10
OnManorama
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated visual content. The article reports actual cases where deepfakes have been used maliciously against celebrities, causing harm to their personal rights and reputations. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The harm is realized, not just potential, and the AI system's role is pivotal in creating the manipulated content.

Katrina Kaif's Deepfake Photo Goes VIRAL; Tiger 3 Actress' Morphed Towel Scene Surfaces On The Internet After Rashmika Mandanna's Video- Check It Out | SpotboyE

2023-11-07
spotboye.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating manipulated images. The circulation of such altered images can cause harm to the individual's reputation and privacy, constituting harm to the community and potentially violating personal rights. Since the manipulated content is actively spreading and causing harm, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through misinformation and reputational damage.

After Katrina Kaif And Rashmika Mandanna's Deepfake Goes Viral, Indian Govt Orders Social Sites To Remove It Within 36 Hrs | SpotboyE

2023-11-08
spotboye.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos and images are generated by AI systems capable of creating realistic synthetic media. The viral spread of such content involving real individuals without consent constitutes a violation of personal rights and can cause reputational harm, misinformation, and social disruption. The government's order to remove the content shows the harm is realized and ongoing. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content and the official response to mitigate it.

After Rashmika, Katrina's hammam scene from 'Tiger 3' gets morphed using Deepfake

2023-11-07
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake AI systems to manipulate images and videos of public figures, creating explicit content that is not real. This misuse of AI technology directly leads to violations of personal rights and potential harm to the individuals involved. Therefore, it qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated content.

Must Read! 'This has to stop, where is our privacy' netizens gets

2023-11-07
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic but fake images or videos by manipulating facial features. The article reports that these AI-generated deepfakes have been used inappropriately, causing harm to the celebrities involved by violating their privacy and potentially damaging their reputations. Since the harm (privacy violation and reputational harm) has already occurred due to the use of AI-generated content, this qualifies as an AI Incident under the framework, specifically a violation of human rights (privacy).

Katrina Kaif's deepfake picture shocks internet; here's what we know - Pakistan Observer

2023-11-08
Pakistan Observer
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated deepfake content that has directly led to reputational damage and privacy violations for celebrities, which are harms under the AI Incident definition. The AI system's use in fabricating realistic but false images and videos is central to the harm. The public outrage and official responses further confirm the materialization of harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

After Rashmika Mandanna, deepfake video of Katrina Kaif's towel fight from Tiger 3 sets internet ablaze

2023-11-07
Janta Ka Reporter 2.0
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos that misrepresent a person, which can be considered a violation of rights and harm to the individual's image and privacy. The harm is realized as the videos have circulated online, causing public concern and distress. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

After Rashmika, Katrina's hammam scene from 'Tiger 3' gets morphed using Deepfake - Weekly Voice

2023-11-07
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake AI to alter images of actress Katrina Kaif, creating explicit content that did not originally exist. This use of AI directly leads to harm by violating the individual's rights and potentially causing reputational damage and emotional distress. The AI system's use here is malicious and results in a breach of fundamental rights, fitting the definition of an AI Incident.

After Rashmika, Katrina Kaif Becomes the new Victim of Deepfake Scammers

2023-11-07
technosports.co.in
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-based deepfake technology to create and spread fake videos and images of public figures, leading to reputational harm and potential violation of rights. The AI system's use directly led to harm (reputational damage and misinformation), fulfilling the criteria for an AI Incident. The article also highlights the need for legal and regulatory frameworks to address such harms, reinforcing the recognition of actual harm caused by AI misuse.

Katrina Kaif targeted after Rashmika Mandanna, deepfake image surfaces from Tiger 3 towel scene

2023-11-07
NEWS9LIVE
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems, specifically deepfake AI technology used to manipulate images and videos. While the deepfakes have been circulated and have caused public concern, the article does not describe any direct or indirect harm that has already occurred, such as physical injury, legal rights violations, or significant community harm. Instead, it highlights the potential for harm and the need for regulatory responses. Therefore, this event fits the definition of an AI Hazard, as the use of deepfake AI technology could plausibly lead to harms like misinformation, reputational damage, or other social harms if unchecked, but no concrete incident of harm is reported yet.

FIR registered in Rashmika Mandanna deepfake video case,

2023-11-11
E24 Bollywood
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to generate a manipulated video causing harm to Rashmika Mandanna. The harm is realized and direct, as the video has caused emotional distress and reputational damage to the actress. The police registration of an FIR and legal action further confirm the recognition of harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

TOI editorial: Rashmika Mandanna's deepfake video, and how helpless AI has left us!

2023-11-08
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating deepfake videos and voice content that have caused real harm, such as financial fraud and threats to democracy and personal autonomy. The harms are direct consequences of AI misuse. The mention of regulatory and technological challenges supports the seriousness of these harms. Therefore, this event fits the definition of an AI Incident due to realized harms caused by AI-generated deepfakes.

Opinion: What happened to Mandanna is nothing; deepfake technology could cause chaos on every front

2023-11-09
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The article explicitly centers on deepfake technology, which is an AI system using deep learning to create synthetic media. It details how this technology has already caused harm to individuals (e.g., Rashmika Mandanna's case), threatens marginalized groups disproportionately, and poses risks to democratic processes and social cohesion. These harms fall under violations of rights and harm to communities. The discussion of legal and regulatory gaps further confirms the AI system's role in causing or enabling these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harms.

What is the difference between fake and deepfake, and which is more dangerous? What are its legitimate uses?

2023-11-08
News18 India
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deep learning, generative adversarial networks) used to create deepfake content. It discusses the plausible harms that such AI-generated content can cause, including social harm and reputational damage, which fall under potential violations of rights and harm to communities. However, since no specific harmful event or incident is described as having occurred, and the focus is on explaining the technology, its risks, and potential uses, this fits the definition of an AI Hazard. It warns about plausible future harms from AI misuse but does not document a realized AI Incident. It also does not primarily focus on responses or updates to past incidents, so it is not Complementary Information.

How are deepfake videos made, and how does another face get fitted onto someone's body?

2023-11-09
News18 India
Why's our monitor labelling this an incident or hazard?
The article describes the AI technology (deep learning and GANs) used to create deepfake videos, which can plausibly lead to harms such as misinformation, privacy violations, and reputational damage. However, it does not describe a particular event where harm has occurred or a near miss. Instead, it provides an overview of the technology and legal responses, which fits the definition of Complementary Information as it enhances understanding of AI-related risks and governance without reporting a new incident or hazard.

What is DeepFake AI technology? | FYI

2023-11-10
hindi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to create manipulated video content that falsely represents a person, leading to harm such as violation of personal rights and potential reputational damage. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights.

FIR registered in Rashmika Mandanna deepfake video case; Delhi Police begin investigation

2023-11-10
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The use of AI technology to create deepfake videos constitutes the involvement of an AI system. The deepfake videos have caused harm by violating the individual's rights and potentially damaging reputation, which falls under violations of human rights and legal protections. The registration of an FIR and police investigation indicates that harm has occurred and is being addressed. Therefore, this event qualifies as an AI Incident due to realized harm stemming from the AI system's misuse.

Rashmika Mandanna's rumoured boyfriend Vijay Deverakonda had posted about her deepfake video; now the actress has reacted, writing 'I agree'

2023-11-10
hindi
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video (an AI system-generated manipulated video) that was circulated and caused harm to the actress's reputation and privacy. The harm is direct and realized, as the video was viral and led to public and legal responses. The AI system's use (deepfake generation) directly led to the harm. Hence, it meets the criteria for an AI Incident due to violation of rights and harm to the individual caused by AI misuse.

Delhi: DCW swings into action over actress Rashmika Mandanna's deepfake video; Swati Maliwal sends notice to Delhi Police

2023-11-10
hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that harms the individual depicted (Rashmika Mandanna). The harm includes violation of rights and emotional distress, which are recognized harms under the AI Incident definition. The DCW's legal action and public concern confirm that harm has occurred. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Delhi Police register FIR in Rashmika Mandanna deepfake video case

2023-11-10
hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake videos, which are created using AI systems capable of generating realistic fake content. The harm involves violation of personal rights and potential psychological and reputational damage to the individual depicted. The police FIR and investigation confirm that the AI system's misuse has directly led to a legal case addressing these harms. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's malicious use.

What is a deepfake, how is it made, and how can you spot one

2023-11-08
LallanTop
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake video content. The mention of celebrities being victims implies actual harm has occurred through the use of these AI systems. Therefore, this event involves the use of AI systems leading to harm to individuals and communities through misinformation and reputational damage, fitting the definition of an AI Incident.

How to identify deepfake videos and stay safe; these tips can help

2023-11-08
Hindustan
Why's our monitor labelling this an incident or hazard?
The article centers on the risks posed by AI-generated deepfake videos and the government's regulatory response, but it does not describe a particular event where harm has occurred due to an AI system. It serves as an informative piece raising awareness and advising caution, which fits the definition of Complementary Information. There is no direct or indirect harm reported from a specific AI system's development, use, or malfunction, nor is there a plausible imminent harm event described. Therefore, the classification is Complementary Information.

Delhi Police served notice over Rashmika Mandanna's 'deepfake video'; DCW takes cognisance and demands action

2023-11-10
Hindustan
Why's our monitor labelling this an incident or hazard?
A deepfake video is created using AI technology to manipulate images or videos, which in this case has been used to produce a fake video of a public figure without consent. This constitutes a violation of rights and causes harm to the individual involved. The event describes realized harm due to the AI system's use (deepfake generation) and the legal and social response to it. Therefore, it qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

DCW seeks response from Delhi Police in Rashmika Mandanna 'deepfake' video case

2023-11-10
ThePrint Hindi
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a deepfake video using AI technology, which has directly led to harm to the individual depicted (Rashmika Mandanna) through unauthorized manipulation and distribution of her image. This constitutes a violation of rights and is a clear AI Incident as per the definitions. The involvement of AI in creating the deepfake is explicit, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

Government takes strict action in Rashmika Mandanna deepfake video case; advisory issued to social media platforms

2023-11-07
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The deepfake video is an AI-generated manipulated content that has already been disseminated on social media, causing harm to the individual's reputation and potentially misleading the public. This constitutes harm to communities and violation of rights, fitting the definition of an AI Incident. The government's advisory to social media platforms to remove such content promptly is a response to this incident, aiming to mitigate ongoing harm. Since the article focuses on the government's advisory and legal framework in response to an existing harmful AI-generated deepfake, the event is best classified as an AI Incident with complementary governance actions.

Rashmika Mandanna deepfake controversy: Government takes a tough stance; Ministry of Electronics and Information Technology issues new advisory requiring social media platforms to remove content within 24 hours of a complaint

2023-11-08
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake video generation, which can cause harm such as misinformation and violation of personal rights. However, the article focuses on the government's regulatory and legal response to such harms, including advisories and penalties, rather than describing a new incident or hazard. Therefore, it fits the definition of Complementary Information as it provides context and governance response to an AI Incident (the deepfake video) rather than reporting a new incident or hazard itself.

Misuse of technology must be stopped: India needs an effective framework to deal with technologies like deepfakes

2023-11-10
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake technology) that have directly led to harms including misinformation, violation of privacy, and reputational damage to individuals, which fall under violations of human rights and harm to communities. The article also details government and platform responses to these harms. Since the harms are realized and the AI system's role is pivotal, this qualifies as an AI Incident rather than a hazard or complementary information. The focus is on the harms caused and the need for regulatory frameworks, not just on general AI developments or responses alone.

What is the deepfake technology that swapped Rashmika Mandanna's face, how dangerous is it, and how can you protect yourself from it?

2023-11-09
Webdunia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) that has directly led to harm, specifically reputational harm to Rashmika Mandanna through a fake video. This fits the definition of an AI Incident because the AI system's use caused a violation of personal rights and harm to an individual. The article also discusses the broader risks and challenges posed by deepfake AI, but the primary focus is on the realized harm from the fake video.

Government takes action in Rashmika Mandanna deepfake video case

2023-11-07
Clipper28 Digital Media
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to generate manipulated video content. The viral spread of this deepfake has caused harm to the individual featured (Rashmika Mandanna) and represents a violation of rights and potential harm to communities through misinformation. The government's advisory and legal references indicate recognition of this harm and steps to mitigate it. Therefore, this qualifies as an AI Incident because the AI-generated content has directly led to harm and legal concerns.

Police act in Rashmika Mandanna deepfake video case; FIR registered

2023-11-11
आज तक
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake AI-generated video) used maliciously to create and spread manipulated content. This use of AI has directly caused harm to the actress by violating her rights and privacy, which falls under violations of human rights or breach of applicable laws protecting fundamental rights. The police action and FIR registration confirm that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident.

Explainer: Never mind Rashmika Mandanna, the internet is awash with deepfake Bollywood porn

2023-11-08
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI and face-swap tools) to create deepfake pornographic content without consent, which directly harms the privacy, dignity, and rights of individuals (notably women and celebrities). The harm is ongoing and widespread, with significant social and personal consequences. The article details the use and misuse of AI tools leading to violations of rights and harm to communities, meeting the criteria for an AI Incident. It is not merely a potential risk or a complementary update but a report of actual harm caused by AI misuse.

Rashmika Mandanna deepfake: Centre issues advisory directing removal of misinformation

2023-11-08
BQ Prime Hindi
Why's our monitor labelling this an incident or hazard?
The advisory references deepfake content, which is typically generated by AI systems, and misinformation that can harm individuals or communities. The government's directive to social media platforms to detect and remove such content is a response to these harms. Since the article focuses on the advisory and instructions rather than describing a specific AI incident or hazard event causing or plausibly leading to harm, it fits the definition of Complementary Information.

Deepfake: If someone makes a deepfake video of you, here is how the law can help; these are the provisions

2023-11-07
आज तक
Why's our monitor labelling this an incident or hazard?
The article centers on the harms caused by AI-generated deepfake videos, which are a form of AI system output that can directly lead to violations of privacy, defamation, cybercrime, copyright infringement, and other harms. It details existing legal frameworks that address these harms, indicating that such harms have occurred or are occurring. Therefore, the event involves an AI system's use leading to direct harm, fitting the definition of an AI Incident rather than a hazard or complementary information. The article is not merely about AI technology or policy responses but about harms caused by AI deepfakes and legal recourse, thus qualifying as an AI Incident.

Rashmika Mandanna deepfake video: Congress wants a legal framework to tackle technological challenges

2023-11-07
LatestLY हिन्दी
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks posed by deepfake AI technology and the call for legal and regulatory responses to mitigate these risks. While it references a viral deepfake video, it does not describe a direct or realized harm caused by the AI system's use or malfunction. Instead, it emphasizes the plausible future harms and the necessity for preventive measures. Therefore, this event qualifies as an AI Hazard, as it concerns the credible risk of harm from AI-generated deepfakes and the need for regulatory action to prevent incidents.

Deepfake: AI brings a big new menace

2023-11-12
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake content that has already circulated widely, causing harm to individuals' privacy and potentially to societal trust through misinformation. This fits the definition of an AI Incident because the AI's use has directly led to harm to individuals and communities through privacy violations and misinformation dissemination.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake videos, which are synthetic media generated by AI. The misuse of these AI systems has directly led to harms including misinformation spread, violation of individuals' rights (e.g., unauthorized use of actors' likenesses), and societal harm through disinformation campaigns. The registration of an FIR and government advisories confirm that harm has occurred and is ongoing. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and violations of rights.

IT Act needs stronger provisions to curb deepfake menace: Experts

2023-11-14
Economic Times
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake generation technologies, which create manipulated images and videos. The harms described include psychological harm to individuals, violations of rights, and potential financial and reputational damage to businesses, all of which have already occurred or are ongoing. The discussion of existing laws being insufficient and the need for stronger regulation indicates that harms are materializing and that current responses are inadequate. Therefore, this event qualifies as an AI Incident because the use of AI-generated deepfakes has directly led to harm, and the article focuses on these realized harms and the need for legal and policy interventions.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
The Times of India
Why's our monitor labelling this an incident or hazard?
The article centers on the risks posed by deepfake AI technology and the societal challenge of disinformation but does not describe a concrete event where harm has already occurred due to deepfakes. It emphasizes the plausible future harm from misuse of deepfakes and the need for detection and education, which aligns with the definition of an AI Hazard rather than an AI Incident or Complementary Information. There is no mention of a specific AI Incident or a governance response, nor is it unrelated to AI. Therefore, it is best classified as an AI Hazard.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-12
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology) used to create synthetic videos that spread disinformation, which is a form of harm to communities and potentially violates rights. The registration of an FIR and government intervention indicate that harm has materialized. The article describes actual misuse of AI systems leading to harm, not just potential or future risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Leading ladies or digital duplicates?

2023-11-14
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article centers on the risks and legal issues posed by AI-generated deepfake videos, which are a form of synthetic media created using AI systems. However, it does not describe a particular event where a deepfake has directly or indirectly caused harm, nor does it report a near miss or plausible future harm from a specific AI system. Instead, it outlines general concerns and legal arguments related to deepfakes as a category of AI-generated content. Therefore, it fits best as Complementary Information, providing context and understanding about AI-related risks and governance rather than documenting a concrete AI Incident or AI Hazard.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake videos that have been used to spread disinformation, which is a form of harm to communities. The registration of an FIR against unidentified persons for creating such deepfakes indicates that harm has occurred. The government's advisory to social media companies to remove such content further confirms the recognition of harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and manipulation of public opinion.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake videos being used to spread misinformation and manipulate public opinion, which constitutes harm to communities. The registration of an FIR against unidentified persons for creating such deepfakes and government advisories to social media platforms to remove such content indicate that harm has materialized and is being addressed. The AI system's use in generating these videos is central to the harm, fulfilling the criteria for an AI Incident.

Bollywood Has A Deepfake Problem

2023-11-11
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The article describes both authorized and unauthorized uses, with unauthorized deepfakes causing harm to actors' rights and identities, which is a violation of human rights and intellectual property rights. The legal case of Anil Kapoor demonstrates realized harm and legal recognition of such harm. The viral unauthorized deepfakes further indicate ongoing harm. Hence, the event meets the criteria for an AI Incident due to direct or indirect harm caused by AI system use.

Deadly imitation: Editorial on deepfake technology and the risks posed by 'frontier AI'

2023-11-12
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deep learning-based generative AI creating deepfakes) that have been used to produce harmful content targeting individuals, leading to financial scams and reputational harm, which are direct harms to persons and communities. It also discusses the violation of rights and public safety risks, fulfilling the criteria for an AI Incident. The mention of regulatory responses and global pacts supports the seriousness of these harms. Although it includes some discussion of potential future risks, the presence of actual incidents and harms takes precedence, classifying this as an AI Incident rather than a hazard or complementary information.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely deepfake technology that generates synthetic videos. The misuse of these AI-generated deepfakes has directly led to harm by spreading misinformation and disinformation, which can manipulate public opinion and disrupt social trust, thus harming communities. The registration of an FIR and government advisories indicate that harm has materialized and is being addressed. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Explained | What is deepfake?

2023-11-10
OnManorama
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems (deep learning-based synthetic media generation) and their misuse leading to harms such as misinformation, fraud, and reputational damage, which qualify as AI Incidents when they occur. However, the article mainly summarizes known issues, past incidents, and potential risks without focusing on a new or specific event causing harm or a new hazard. It also includes government advisory actions, which are governance responses. Therefore, the content fits best as Complementary Information, providing important context and updates on AI-related harms and societal responses rather than reporting a distinct AI Incident or AI Hazard.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
Press Trust of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos, which have been used to spread misinformation and manipulate public opinion, constituting harm to communities. The police FIR and government advisory show that harm has occurred and is being addressed. The AI system's use directly led to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
DT next
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that synthesize realistic but fake video content. The article describes actual incidents of such videos being created and disseminated, leading to misinformation and manipulation, which harms communities by spreading false information and undermining trust. The police registration of an FIR and government advisories show that harm has materialized and is being addressed. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated disinformation.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
The Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos that have been used to spread disinformation, which is a form of harm to communities and a violation of rights. The registration of an FIR shows that harm has materialized and is being addressed legally. The AI system's use in generating these videos directly leads to the harm described. Although the article also discusses detection and mitigation strategies, the primary focus is on the realized harm caused by AI-generated deepfakes. Therefore, this event qualifies as an AI Incident.

IT Act needs stronger provisions to curb deepfake menace: Experts

2023-11-14
The Economic Times
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake generation technology, which is an AI system capable of creating manipulated videos. The harms described include psychological harm to individuals, violations of rights, and potential business harms, which fall under the definitions of AI Incident harms (a), (c), and (d). The viral deepfake video of a public figure and the discussion of its impact indicate that harm has already occurred. The article also discusses regulatory and policy responses, but the main focus is on the harms caused by AI-generated deepfakes and the insufficiency of current laws to address them. Therefore, this event qualifies as an AI Incident rather than merely a hazard or complementary information.

Deepfakes: How Did It Originate And What Can You Do?

2023-11-10
BOOMLive
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (deepfake technology using GANs) that has been used to create harmful synthetic media, including non-consensual explicit videos and misinformation, which constitute violations of rights and harm to communities. The Ministry's advisory and legal references indicate that harm has occurred and is ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms. The article also discusses mitigation and detection efforts, but the main narrative centers on the harms caused by deepfakes and their societal impact, not just complementary information or potential hazards.

All about deepfake tech: AI-powered videos intensify debate on disinformation epidemic

2023-11-11
OrissaPOST
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the form of deepfake technology, which is used to create synthetic videos that have been misused to spread disinformation, a harm to communities and potentially a violation of rights. The FIR registration indicates an AI Incident has occurred due to misuse of AI-generated content. However, the article primarily serves as an informative piece about the technology, its risks, and detection methods, rather than reporting a new specific incident or hazard event. Therefore, it fits best as Complementary Information, providing context and societal response to ongoing AI-related harms rather than describing a new AI Incident or AI Hazard.

Navigating Perils of Deepfake Technology

2023-11-11
Jammu Kashmir Latest News | Tourism | Breaking News J&K
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) that directly led to harm to individuals (distress, privacy violation) and potential harm to communities through misinformation and manipulation. The deepfake video is a concrete example of AI misuse causing realized harm, fitting the definition of an AI Incident. The article also highlights the need for legal and technological measures but the primary focus is on the actual harm caused by the deepfake video incident.

Deepfakes: The Growing Concern with AI Technology

2023-11-13
Bollyinside
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it centers on AI-generated deepfakes, which are a known AI application. However, it does not report a realized harm or incident but rather warns about the growing difficulty in detecting deepfakes and the potential for future misuse leading to harm such as misinformation and manipulation. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to harm but no specific harm event is described.

Celeb deepfakes just the tip, revenge porn, fraud & threat to polls form underbelly of AI misuse

2023-11-13
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deep learning-based deepfake generation) being used to produce harmful content that has caused direct harm to individuals (e.g., reputational harm, privacy breaches, financial fraud) and communities (e.g., misinformation affecting elections). The harms are realized and ongoing, with law enforcement actions such as FIRs filed. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The article also notes the absence of adequate legal frameworks, but the harms are already occurring, so this is not merely a hazard or complementary information. Hence, the classification is AI Incident.

Fake Bollywood video highlights AI worries in India

2023-11-07
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI that falsely depicts a Bollywood actor in a compromising way, leading to emotional harm and public outrage. The AI system's misuse has directly caused harm to the individuals involved and has broader implications for societal harm due to misinformation and sectarian tensions. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons and communities.

After Rashmika Mandanna's fake video, Katrina Kaif's morphed picture goes viral

2023-11-07
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI tools used to morph and replace faces in images and videos, constituting an AI system. The manipulated content caused reputational damage and emotional harm to the individuals depicted, fulfilling the criteria for harm to persons and communities. The widespread circulation of non-consensual deepfake content is a direct consequence of AI misuse. The regulatory and platform responses further confirm the recognition of harm caused. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI system misuse.

Fake Bollywood video highlights AI worries in India

2023-11-07
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated videos that have caused harm to individuals (reputational damage, emotional distress) and have broader societal implications (inciting outrage, potential sectarian tensions). The harm has already occurred, fulfilling the criteria for an AI Incident. The use of AI-generated deepfakes for non-consensual pornography and misinformation is a recognized form of violation of rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Fake Bollywood video highlights AI worries in India

2023-11-08
SpaceWar
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI technology that manipulates a person's face onto another's body without consent, causing reputational damage and emotional harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to individuals (Mandanna and Patel) and communities (through spreading misinformation and inciting tensions). The harm is realized, not just potential, and involves violations of rights and harm to communities. Therefore, this event is classified as an AI Incident.

India-entertainment-technology-women-AI

2023-11-07
nampa.org
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI technology that falsely depicts a public figure in a compromising way, leading to emotional harm and reputational damage. The AI system's use in generating manipulated content that is widely disseminated constitutes a violation of rights and harm to the individual and community trust. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Fake video of Bollywood actress highlights AI worries in India

2023-11-07
The New Paper
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI technology that falsely depicts a Bollywood actress in a compromising manner. The video has been widely circulated, causing emotional harm to the actress and distress to others involved. The use of AI to create manipulated videos that spread misinformation and non-consensual content is a direct cause of harm, fitting the definition of an AI Incident. The harm includes violation of personal rights and reputational damage, as well as broader societal harm through misinformation and potential incitement of social tensions.

Congress asks Maharashtra govt to prepare legal, regulatory framework to deal with deepfakes

2023-11-11
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article discusses the problem of AI-generated deepfakes and the political demand for regulatory action, which is a governance and societal response to a recognized AI-related risk. There is no description of a specific AI Incident (harm realized) or AI Hazard (plausible future harm event) occurring in this report. The focus is on the need for frameworks and identification mechanisms to prevent or mitigate harm from deepfakes. Therefore, this qualifies as Complementary Information, as it provides context and response to AI-related risks without reporting a new incident or hazard.

Congress asks Maharashtra govt to prepare legal, regulatory framework to deal with deepfakes

2023-11-11
Economic Times
Why's our monitor labelling this an incident or hazard?
The article centers on political and social concerns about deepfakes and the need for regulatory responses. While it references harms caused by deepfakes (which are AI-generated content), it does not describe a specific incident in which direct or indirect harm newly occurred, nor a specific event that could plausibly lead to harm but has not yet done so. Instead, it highlights calls for legal frameworks and identification mechanisms, which are governance responses. Therefore, this is best classified as Complementary Information, as it provides context and societal response to AI-related harms rather than reporting a new AI Incident or AI Hazard.

Moneycontrol Daily: Your Essential 7

2023-11-12
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation) and addresses the potential harms they could cause, but it does not describe any realized harm or a specific incident involving AI misuse or malfunction. Instead, it focuses on the need for regulatory measures and identification mechanisms, which classifies it as complementary information about societal and governance responses to AI risks.

Maharashtra Congress urges Shinde govt. to set up committee to fight deepfakes

2023-11-11
The Hindu
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (deepfake generation) that has been used to create misleading content, causing reputational and social harm. However, it focuses on the political response and the call for regulatory frameworks rather than on a concrete AI Incident; no direct or indirect harm beyond reputational damage and public concern has materialized. The potential for further harm (e.g., political chaos, law and order issues) is noted but not confirmed as having occurred. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related risks rather than reporting a new AI Incident or AI Hazard.

The dark shadow of deepfakes: Essential to develop robust detection mechanisms, legal frameworks

2023-11-14
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (GANs) generating deepfake content that has caused real harm: non-consensual deepfake creation leading to harassment and privacy violations, as well as reputational damage to individuals. The example of the actress's deepfake video causing harm despite being debunked confirms that the harm is realized. The involvement of AI in generating the harmful content is direct and pivotal. The article also discusses the challenges of detection and regulation, but it focuses primarily on harms that have already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Maharashtra: Congress asks state govt to prepare legal, regulatory framework to deal with deepfakes

2023-11-11
mid-day
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (deepfake generation) that have caused harm (reputational damage, social disruption) through misuse. The deepfake video has already circulated, indicating realized harm to individuals and potential harm to communities and public order. However, the article primarily reports a political demand for regulatory action and preventive frameworks, not a new incident or hazard event. Therefore, it is best classified as Complementary Information, as it provides context and governance response to an existing AI-related harm rather than describing a new AI Incident or AI Hazard.

Congress asks Maharashtra govt to prepare legal, regulatory framework to deal with deepfakes

2023-11-11
Press Trust of India
Why's our monitor labelling this an incident or hazard?
The article mentions the circulation of a deepfake video, which is an AI-generated manipulated media, but does not report any specific realized harm such as injury, rights violations, or disruption caused by the deepfake. The focus is on the call for regulatory measures to prevent or manage such harms in the future. Therefore, this event represents a plausible risk scenario related to AI deepfakes but does not describe an actual incident of harm. It is best classified as Complementary Information because it provides context on societal and governance responses to AI-related risks rather than reporting a new AI Incident or Hazard.

Congress asks Maharashtra govt to prepare legal, regulatory framework to deal with deepfakes

2023-11-11
The Economic Times
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can be used maliciously to create false and harmful content. The article highlights realized harms caused by deepfakes, such as defamation and threats to social order. Although it emphasizes the call for regulatory action rather than detailing a specific harm event, the harms are ongoing and the AI system's role in causing them is direct, so this qualifies as an AI Incident. The request for a legal framework is a response to those harms; the main focus remains the harm that deepfake content has caused to individuals and communities.

Deepfakes row: Cong asks Maha govt to prepare legal framework - The Shillong Times

2023-11-12
The Shillong Times
Why's our monitor labelling this an incident or hazard?
The article discusses the potential harms of AI-generated deepfakes and the need for regulatory responses, which is a governance and societal response to AI-related risks. The mention of a circulating deepfake video indicates an AI-related event but does not detail realized harm or a specific incident causing harm. Therefore, this is best classified as Complementary Information, as it provides context and response to AI risks rather than describing a new AI Incident or AI Hazard.