Surge of Deepfake AI Videos Targets Indian Actresses, Sparks Government Action


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos featuring Indian actresses like Kajol, Rashmika Mandanna, and Katrina Kaif have gone viral, causing reputational harm and raising privacy concerns. The Indian government has urged social media platforms to remove such content, warning of penalties, while police and cybercrime units investigate the incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfake technology) to create manipulated videos that falsely depict individuals, causing harm to their reputation and privacy. The videos have been widely shared, misleading the public and prompting legal action, indicating realized harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Accountability; Robustness & digital security; Transparency & explainability; Safety

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence; Arts, entertainment, and recreation

Affected stakeholders
Women

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


After Rashmika Mandanna, Kajol's Deepfake Video Of Changing Dress On Camera Goes Viral

2023-11-16
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos that falsely depict individuals, causing harm to their reputation and privacy. The videos have been widely shared, misleading the public and prompting legal action, indicating realized harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities).

Deepfake Alert! Viral Clip Shows Kajol Changing Clothes After Rashmika Mandanna's Video Controversy

2023-11-16
TimesNow
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a deepfake video targeting a celebrity, which is an AI system's use leading to harm through misinformation and violation of personal rights. Since the video is already viral, the harm is occurring, qualifying this as an AI Incident under violations of rights and harm to communities.

Kajol becomes latest victim of deepfake technology; PM Modi addresses 'problematic' trend

2023-11-17
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that have been disseminated, causing harm to the celebrities' reputations and privacy. The harm is realized as the videos are viral and have prompted government and public concern. The involvement of AI in the creation and spread of these videos directly leads to violations of rights and harm to communities through misinformation and manipulation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

After deepfake videos of Rashmika Mandanna, Katrina Kaif and Kajol go viral, PM Narendra Modi expresses concern; says 'A new crisis is emerging...' | Etimes - Times of India Videos

2023-11-18
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake videos, which are AI-generated synthetic media. The harm includes violations of personal rights and reputational damage to the individuals depicted, as well as potential harm to communities through misinformation and erosion of trust. Since the deepfakes are already circulating and causing harm, this qualifies as an AI Incident. The Prime Minister's concern and call for public education further underscore the recognized impact of these AI-generated harms.

Deepfake Video Row: PM Modi says deepfakes one of the biggest threats - Times of India Videos

2023-11-17
The Times of India
Why's our monitor labelling this an incident or hazard?
The article discusses the potential risks and societal harm that deepfake AI technology could cause, such as misinformation and damage to individuals' reputations and public trust. Since no specific harm has been reported as having occurred yet, and the focus is on the plausible future threat and the need for awareness, this qualifies as an AI Hazard. It is not an AI Incident because no direct or indirect harm has been described as having happened. It is not Complementary Information because the main focus is on the hazard itself, not on updates or responses to a past incident. It is related to AI systems as deepfake technology involves AI-based synthetic media generation.

More Fake Videos Emerge | How To Combat 'Deepfake'? | Top News

2023-11-18
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which are realistic but fabricated digital content. The article indicates that these deepfakes have the potential to spread misinformation and cause chaos, which constitutes harm to communities. Since the harm is ongoing or realized through the circulation of these videos, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to harm to communities through misinformation.

PM Modi on 'DEEPFAKE': Calls it One of the Biggest Threats, Dark Side of AI | Rashmika Mandanna Row - Times of India Videos

2023-11-18
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that synthesize realistic but fake media content. The viral spread of such videos featuring public figures like Rashmika Mandanna and others directly harms the individuals involved by misrepresenting them and potentially misleading the public. PM Modi's statement highlights the recognized threat and harm caused by these AI-generated manipulations. The event describes actual harm occurring through the use of AI, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Deepfake Video of Kajol Changing Clothes Goes Viral After Rashmika Mandanna's Video Controversy - WATCH | 🎥 LatestLY

2023-11-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create fabricated videos. The harm includes violations of privacy and potential reputational damage to individuals, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the deepfake videos are actively circulating and causing concern, the harm is realized rather than merely potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through fabricated content dissemination.

After Rashmika Mandanna and Katrina Kaif, Kajol's deepfake video stirs up a storm | Etimes - Times of India Videos

2023-11-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos, which have directly led to harm in the form of reputational damage and distress to the celebrities depicted. The malicious use of AI to create and spread misleading and indecent content constitutes a violation of rights and harm to individuals and communities. The government's advisory and public concern further highlight the significance of the harm caused. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake videos.

After Rashmika Mandanna And Katrina Kaif, Kajol's Deepfake Video Surfaces Online

2023-11-17
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes the creation and circulation of AI-generated deepfake videos that falsely depict celebrities, which constitutes a violation of rights and harm to communities by spreading misinformation and potentially damaging reputations. The AI system's use in generating these videos is central to the harm caused. Since the harm is realized and legal actions are underway, this qualifies as an AI Incident rather than a hazard or complementary information.

Now Deepfake Video Of PM Modi Singing Garba Song: "Big Concern"

2023-11-17
NDTV
Why's our monitor labelling this an incident or hazard?
The article focuses on the risks and concerns related to AI-generated deepfake videos and the government's response, including legal advisories and calls for responsible use of technology. There is no description of a concrete AI Incident where harm has materialized, nor a specific AI Hazard event causing plausible future harm in a particular case. The main content is about societal and governance responses to the broader issue of AI misuse, making this a case of Complementary Information rather than an Incident or Hazard.

Explained: How ChatGPT Algorithm Is Used to Make Deepfakes

2023-11-17
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative models and ChatGPT) to create deepfake videos, which have already been produced and circulated, causing harm such as misinformation and potential violation of individuals' rights. This constitutes an AI Incident because the AI's use has directly led to harm through the creation and dissemination of deceptive content. The article also discusses the need for responses but the primary focus is on the realized misuse and harm from AI-generated deepfakes.

After Rashmika Mandanna, Deepfake Video of Kajol Changing Her Dress On Camera Goes Viral - News18

2023-11-17
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The videos have been widely circulated, causing reputational harm and privacy violations to the celebrities involved, which are harms to individuals and communities. The involvement of AI in the creation and dissemination of these videos directly leads to these harms. Hence, this is an AI Incident as per the definitions provided.

Indian Idol 1 Runner-Up Accuses Channel Of Cheating | Emraan On SRK, Salman | Kajol's Deepfake Video - News18

2023-11-17
News18
Why's our monitor labelling this an incident or hazard?
The presence of a deepfake video implies the use of AI systems for generating synthetic media. However, the article only reports the existence and virality of the deepfake without describing any harm such as reputational damage, misinformation, or rights violations. There is no indication of harm occurring or plausible harm that could arise from this event as described. Therefore, this is best classified as Complementary Information, providing context about AI-generated content in the entertainment domain without a specific AI Incident or Hazard.

'I watched a video in which I was doing garba': India PM Modi warns about deepfakes

2023-11-18
The Independent
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic but fake videos. The article details actual instances where deepfakes have been used to create misleading videos of public figures, causing reputational harm and emotional distress, which are harms to individuals and communities. The Indian Prime Minister's warning and the celebrities' reactions confirm that harm has materialized. Hence, this event meets the criteria for an AI Incident due to realized harm caused by the use of AI-generated deepfakes.

Bengaluru Police Launches Helpline To Combat Deepfake Menace: Here's All Details

2023-11-18
India.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is explicitly described as AI-based synthetic media creation using deep learning algorithms. The article highlights actual harm caused by deepfakes, such as viral videos of celebrities and political figures, which have led to public concern and potential reputational damage. The police's launch of a helpline to report and address these incidents indicates that harm has occurred and is ongoing. Therefore, the event involves the use of AI systems leading directly to harm (misinformation, manipulation, reputational harm), fitting the definition of an AI Incident.

PM Modi's Deepfake Video Singing Garba Raises Serious Concern On Misuse Of AI

2023-11-17
India.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake generation) and discusses their misuse to create misleading videos. Although no specific incident of harm is reported as having occurred, the potential for harm is clearly articulated, including misinformation and privacy violations. The article focuses on the risk and societal concern about deepfakes, legal obligations for platforms, and calls for vigilance, which aligns with the definition of an AI Hazard. There is no description of a realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the risk and concern about misuse of AI deepfakes, not on responses or updates to past incidents. Hence, AI Hazard is the appropriate classification.

Kajol's Deepfake Video Changing Clothes Goes Viral Amid Rashmika's Video Controversy

2023-11-16
India.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated media generated by AI techniques. The harm is realized as the videos have circulated widely, causing emotional distress and violation of personal rights (a form of human rights violation). The involvement of AI is direct in the creation and dissemination of these manipulated videos. The government's advisory further confirms the recognition of harm caused by these AI-generated deepfakes. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

PM Modi Raises Concerns Over Deepfakes, Highlights Potential Misuse Of AI

2023-11-17
Zee News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake technology) being used to create manipulated videos that have already caused reputational harm and social outrage, which constitutes harm to communities and individuals. The misuse of AI in this way has directly led to harm, fulfilling the criteria for an AI Incident. The governmental call for action further supports the recognition of realized harm rather than just potential harm.

Kajol falls prey to deep fake after Rashmika - Telugu News - IndiaGlitz.com

2023-11-18
IndiaGlitz.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate images and videos to create false content. The article explicitly mentions AI-generated deepfakes causing distress to celebrities, which constitutes harm to individuals' rights and reputations. The harm is realized, not just potential, as the videos have circulated widely and caused public outrage and distress. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs (deepfake videos).

Kajol's deepfake video goes viral after Rashmika Mandanna, Katrina Kaif

2023-11-17
India Today
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos by replacing faces. The viral spread of such videos directly harms the individuals involved by violating their privacy and potentially damaging their reputation, which is a violation of rights and harm to communities. The article reports that these videos are already circulating widely, indicating realized harm rather than just potential. Therefore, this event meets the criteria for an AI Incident due to the direct involvement of AI-generated deepfake content causing harm.

PM Modi Raises Concerns About Deepfake Videos After Finding A Clip Of Him Singing A Garba Song; 'Technology...'

2023-11-17
Mashable India
Why's our monitor labelling this an incident or hazard?
The article discusses the existence and circulation of AI-generated deepfake videos, which are a known risk for misinformation and reputational harm. While the PM found a deepfake video of himself, the article does not describe any realized harm such as injury, rights violations, or disruption caused by these videos. The main focus is on raising awareness and calling for responsible use and media education. Therefore, this event fits the definition of an AI Hazard, as the misuse of AI deepfake technology could plausibly lead to harm, but no specific AI Incident is reported here.

After Rashmika Mandanna, Kajol's Deepfake Video Goes Viral; Gets Morphed In A GRWM Video

2023-11-17
Mashable India
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI deepfake technology to create and spread non-consensual morphed videos of public figures, which constitutes a violation of their rights and causes harm to their reputation and privacy. The AI system's use is central to the harm occurring, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The involvement of legal actions further confirms the recognition of harm caused by the AI system's misuse.

After Katrina Kaif & Rashmika Mandanna, Kajol Devgan Becomes Target Of Deepfake Video

2023-11-17
english
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos that have been shared widely on social media, targeting celebrities like Kajol Devgan, Rashmika Mandanna, and Katrina Kaif. These deepfakes involve AI systems that manipulate video content to replace faces, which constitutes the use of AI systems. The harms include violation of privacy, reputational harm, and potential misinformation spread, which fall under harm to communities and violations of rights. Since the harm is occurring (videos are circulating and causing concern), this qualifies as an AI Incident rather than a hazard or complementary information.

'Saw A Video Where I Was Singing': PM Modi Voices Concern Over Deepfake

2023-11-17
TimesNow
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake content. The widespread sharing of such videos of public figures like actors Kajol, Rashmika Mandanna, and Katrina Kaif indicates actual dissemination of misleading content, which harms communities by spreading misinformation and potentially damaging reputations. PM Modi's remarks highlight the challenge and harm posed by these AI-generated deepfakes. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated content.

Top News: After Rashmika, Katrina, Kajol's Deepfake GRWM Video Rattles The Internet | English News

2023-11-17
TimesNow
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI technology that falsely depicts Kajol in a compromising scenario. This constitutes a violation of personal rights and can cause harm to the individual and communities by spreading misleading and harmful content. The AI system's use here directly leads to harm through malicious use of AI-generated content, fitting the definition of an AI Incident.

No End To Deepfake Menace | How Safe Are We From Deepfakes In Real Life? | Newshour Agenda

2023-11-17
TimesNow
Why's our monitor labelling this an incident or hazard?
The presence of deepfake videos indicates the involvement of AI systems generating manipulated content. While such deepfakes can cause harm to individuals' reputations and potentially mislead the public, the article does not report a specific realized harm or incident resulting from these deepfakes. Instead, it focuses on the discussion and awareness of the issue, which aligns with providing complementary information about AI-related risks and societal responses rather than documenting a concrete AI incident or hazard.

What Are Deep Fakes And Why They Have Raised Alarm; How To Spot Fake Videos | Explained

2023-11-17
Jagran English
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (deepfake technology) to create manipulated videos that have caused harm by violating privacy and spreading misinformation. The harms described align with violations of human rights and harm to communities. Since these harms are occurring (e.g., viral deepfake videos of public figures), this qualifies as an AI Incident. The article also discusses legal responses and awareness measures, but the primary focus is on the realized harms caused by AI-generated deepfakes.

Bengaluru Police Launches Helpline To Tackle Deepfake Menace

2023-11-18
Jagran English
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system capable of generating realistic fake videos, which can cause harm such as reputational damage, misinformation, and violation of privacy and rights. The article describes the existence of viral deepfake videos causing public concern and the police's response to this harm. Since the deepfake videos have already circulated and caused harm, this qualifies as an AI Incident involving the use of AI systems leading to harm to communities and individuals. The police helpline is a response to this incident but does not itself constitute a new incident or hazard.

Kajol's Video Changing Outfit Goes Viral After Rashmika Mandanna | Deepfake Row

2023-11-16
Jagran English
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake AI technology) used to create fabricated videos that have been widely disseminated, causing harm to the privacy and rights of the individuals depicted. This constitutes a violation of personal rights and harms communities by spreading misinformation and fake content. Since the harm is occurring (videos are viral and causing uproar), this qualifies as an AI Incident. The mention of YouTube's policy is complementary information but does not change the primary classification.

Deepfake Concern: Govt To Meet Google, Meta, Other Social Media Firms To Address AI Crisis; Here's What IT Minister Said

2023-11-19
Jagran English
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake content that has already caused harm by spreading false and manipulated videos of public figures, which can be considered harm to communities and individuals' rights. The AI system's use (deepfake generation) has directly led to this harm. The government's response and planned meeting are complementary information but the core event is the ongoing harm caused by AI deepfakes. Therefore, this qualifies as an AI Incident due to realized harm from AI-generated deepfake content.

YouTube To Penalise Content Creators Who Do Not Mention Use Of Deepfake In Videos

2023-11-15
Jagran English
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake generative AI) that have already caused harm by spreading misleading synthetic videos, which can harm individuals' reputations and communities. YouTube's new policy and enforcement represent a governance and societal response to an existing AI Incident (the viral deepfake video). Since the main focus is on the platform's response and policy implementation rather than the incident itself, this qualifies as Complementary Information rather than a new AI Incident or Hazard.

Fact-Check: Trending video featuring Kajol changing clothes exposed as deepfake; read details

2023-11-16
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake technology used to manipulate videos, which directly leads to harm by violating the actresses' rights and spreading misinformation, thus harming communities and individuals. The Indian government's advisory to social media platforms to remove such content further confirms the recognition of harm caused. Therefore, this qualifies as an AI Incident due to realized harm from the use of AI-generated deepfake content.

PM Narendra Modi says deepfake a 'big concern'; urges ChatGPT to give warning

2023-11-17
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos causing harm by spreading misinformation and violating individuals' rights, with real examples of harmful content already circulating. The involvement of AI systems in creating these deepfakes is clear, and the harm to communities and individuals is occurring. The government's response and legal penalties further confirm the recognition of actual harm. Hence, this is an AI Incident rather than a hazard or complementary information, as harm is realized and ongoing.

After Rashmika Mandanna, deepfake video of Kajol changing clothes emerges on social media

2023-11-16
India TV News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI system that generates manipulated video content by superimposing faces. The misuse of this AI system has directly led to harm in the form of identity theft, violation of personal rights, and psychological harm to the individuals impersonated. The viral spread of such content on social media platforms demonstrates realized harm to communities and individuals. The government's legal advisory further confirms the recognition of these harms under applicable law. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to individuals.

PM Modi flags misuse of AI to create 'deep fake', says 'media must educate people'

2023-11-17
India TV News
Why's our monitor labelling this an incident or hazard?
Deep fake videos are generated using AI systems that manipulate visual content to create realistic but fabricated videos. The circulation of such videos harms individuals' reputations and can mislead communities, constituting harm to communities and violations of rights. The article reports that these deep fakes have already surfaced and gone viral, indicating realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deep fakes.

Rashmika Mandanna, Katrina Kaif, now Kajol: Why are we seeing sudden uptick in deepfake videos

2023-11-17
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which have directly led to harm in the form of misinformation and reputational damage to individuals, fulfilling the criteria for harm to communities and violation of rights. The deepfakes are actively circulating and causing harm, not just a potential risk. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated manipulated content.

Deepfake menace deepens as another Bollywood actress gets targeted after Rashmika, Katrina

2023-11-16
Telangana Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation technology) used maliciously to create and spread harmful content, which constitutes a violation of rights and harm to individuals and communities. The harm is realized as the videos have gone viral and caused outcry, indicating actual harm rather than just potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

After Rashmika Mandanna & Katrina Kaif, Kajol becomes victim of deepfake, her GRWM video stirs the internet

2023-11-17
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake technology used to create manipulated videos that harm the reputation and privacy of the actress Kajol. The misuse of AI in this way leads to violations of personal rights and emotional harm, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. The harm is realized as the video is circulating online and causing distress, not merely a potential risk. Hence, it is classified as an AI Incident.

PM Modi cautions public against deepfakes, has a stern warning for AI companies

2023-11-17
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake generation and detection tools) and their misuse causing harm through misinformation and reputational damage, which fits the definition of AI Incident harm. However, the main focus is on the Prime Minister's warnings, government directives, and legal measures to combat the problem, rather than reporting a new incident or hazard event. The mention of past deepfake controversies provides context but is not the central event. Thus, the article primarily provides complementary information about societal and governance responses to an ongoing AI-related harm issue, fitting the Complementary Information category.

'Saw a video of me singing,' says PM Modi while flagging issues with deepfakes and AI

2023-11-17
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The deepfake video is an AI-generated manipulated video that has been circulated, causing reputational harm and misinformation risks. This constitutes harm to communities and individuals through deceptive content, fulfilling the criteria for an AI Incident. The government's advisory is a response to this realized harm, not the primary focus of the article, which centers on the incident of the deepfake circulation itself.

After Rashmika Mandanna, Katrina Kaif and Sara Tendulkar a deepfake video of Kajol goes viral

2023-11-16
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which are AI-generated synthetic media. The malicious use of these AI systems has directly led to harm in terms of violation of privacy and reputational damage to the individuals targeted. The widespread sharing of such content on social media platforms also harms communities by spreading misinformation and potentially causing social disruption. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Bengaluru Police launch helpline to deal with Deepfake menace

2023-11-18
The Hans India
Why's our monitor labelling this an incident or hazard?
Deepfake technology involves AI systems that generate synthetic media, which can be used maliciously to deceive and manipulate people, constituting a plausible risk of harm to individuals and communities. The police helpline is a response to this threat, aiming to mitigate potential harms from deepfake misuse. Since no specific harm event is described but the potential for harm is clearly recognized and addressed, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

'New Crisis Is Emerging': PM Modi Raises Alarm on AI-Generated Deepfakes

2023-11-17
TheQuint
Why's our monitor labelling this an incident or hazard?
The creation and viral dissemination of an AI-generated deepfake video constitutes a direct harm to the individual depicted, including potential violations of privacy and reputational harm, which falls under harm to communities and individuals. The AI system's use (deep learning for deepfake generation) directly led to this harm. Therefore, this qualifies as an AI Incident. The advisory issued is a response to this incident, but the main event is the harm caused by the AI-generated deepfake.

PM Modi Issues Warning on Deepfakes, Urges Responsible AI Use

2023-11-17
MySmartPrice.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) that have been used maliciously, causing psychological harm and misinformation, which are recognized harms under the AI Incident definition. The government's imposition of penalties and warnings indicates that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident because the misuse of AI deepfake systems has directly led to harm to individuals and communities, including emotional distress and misinformation dissemination.

Deepfake video of Kajol emerges following Rashmika Mandanna and Katrina Kaif

2023-11-17
english.madhyamam.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that manipulate visual content by replacing faces, which is a clear AI system involvement. The use of these AI-generated deepfakes has directly led to harm by misleading the public, violating the rights of the celebrities involved, and causing reputational damage. The legal response and public concern further confirm the materialization of harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to individuals and communities through misinformation and manipulation.

Deepfake Menace: Bengaluru Police Launch Helpline to Deal With Digital Deception | 📰 LatestLY

2023-11-18
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) that have been used to create misleading content causing harm to individuals' reputations and public trust, which fits the definition of AI-related harm. However, the article focuses on the police launching a helpline and raising awareness rather than describing a new or specific AI incident or hazard. This is a societal/governance response to an ongoing AI-related issue, enhancing understanding and mitigation efforts. Therefore, it qualifies as Complementary Information rather than an AI Incident or Hazard.

PM Modi Sounds Alarm On Deepfakes, Urges Vigilance

2023-11-17
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential and ongoing risk posed by AI-generated deepfakes to misinformation and societal trust, which aligns with the definition of an AI Hazard. There is no direct evidence of harm having occurred from the described deepfake videos, only concern and calls for action. Therefore, this event is best classified as an AI Hazard due to the plausible future harm from misuse of AI deepfake technology.

Deepfakes: 'Sword Of Damocles' Hanging Over India

2023-11-18
BW Businessworld
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI used to create deepfake videos. The misuse of these AI systems has directly led to harms such as misinformation, reputational damage, and potential violations of privacy and rights of individuals depicted in the deepfakes. These harms affect communities and individuals, fulfilling the criteria for harm to communities and violations of rights under the AI Incident definition. The article describes actual incidents of harm caused by AI-generated deepfakes, not just potential risks, and discusses legal obligations to address these harms. Therefore, this event qualifies as an AI Incident.

B'luru Police launch helpline to deal with Deepfake menace

2023-11-18
Social News XYZ
Why's our monitor labelling this an incident or hazard?
Deepfake technology is explicitly described as AI-based synthetic media created using deep learning algorithms. The article references actual viral deepfake videos causing public concern, indicating realized harm related to misinformation and digital deception, which can be considered harm to communities. The police helpline is a response to this ongoing issue. Since the article focuses on the existing problem of deepfake misuse causing harm and the police's response, this qualifies as Complementary Information rather than a new AI Incident or AI Hazard. The article does not report a new incident of harm caused by AI but rather the societal/governance response to an existing AI-related harm.

Rashmika Mandanna, Katrina Kaif, Kajol & More: 5 Celebrities Who Have Become Victims Of The Deepfake Technology

2023-11-16
Koimoi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos that have directly led to harm in the form of reputational damage and distress to multiple celebrities. The article describes actual occurrences of these deepfake videos being circulated and the negative impact on the victims, fulfilling the criteria for an AI Incident. The involvement of AI in the creation of these videos and the resulting harm to individuals' rights and reputations is clear and direct.

PM Modi Raises Alarm On AI Misuse For Deepfake Videos, Calls For Responsible Use Of Technology

2023-11-17
Swarajyamag
Why's our monitor labelling this an incident or hazard?
The article centers on the recognition of AI misuse risks (deepfake videos) and the government's legal and policy responses, including advisories and penalties. It does not describe a concrete AI Incident (harm realized) or a specific AI Hazard (a particular event with plausible future harm). Rather, it provides complementary information about ongoing societal and governance efforts to manage AI-related risks. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI misuse issues and responses without reporting a new incident or hazard.

'Deepfake problematic': PM Modi raises concerns over misuse of AI

2023-11-17
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of a deepfake video generated using AI, which is a misuse of AI technology. While this misuse can lead to significant harm such as reputational damage and misinformation, the article does not specify any realized harm or legal violations resulting from this event. The Prime Minister's remarks and the viral nature of the video indicate a credible potential for harm, making this an AI Hazard rather than an AI Incident. There is no indication that this is merely complementary information or unrelated news, as the focus is on the misuse of AI and its risks.

WHAT?! Kajol's Deepfake Video Of Changing Dress On Camera Goes Viral! New Fake Clip Leaves Netizens Fuming-DETAILS INSIDE | SpotboyE

2023-11-16
spotboye.com
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual content to create realistic but fake videos. The malicious use of such AI-generated deepfakes to harm individuals' reputations and spread false information fits the definition of an AI Incident, as it directly leads to harm to communities and violations of personal rights. The article reports that these videos have already gone viral, indicating realized harm rather than a potential future risk.

Kajol becomes latest victim of deepfake technology; PM Modi addresses 'problematic' trend

2023-11-17
The Economic Times
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos. The morphed videos of celebrities and the Prime Minister have been circulated on social media, causing reputational harm and privacy violations, which are forms of harm to individuals and communities. The event describes actual harm occurring due to the AI system's use (misuse), meeting the criteria for an AI Incident. The government's response and video removal are complementary but do not negate the incident classification.

After Rashmika and Katrina, now Kajol's deepfake video goes viral

2023-11-17
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The use of AI-based deepfake technology to create manipulated videos that falsely depict individuals can cause harm to the reputation and privacy of the persons involved, constituting a violation of rights. Since the videos are already circulating and causing concern, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in creating harmful manipulated content affecting individuals' rights and reputations.

PM Modi speaks on deepfake days after Rashmika Mandanna's video went viral, says misuse of AI

2023-11-17
APN Live
Why's our monitor labelling this an incident or hazard?
Deepfake videos involve AI systems that generate manipulated content, which can cause harm to individuals' reputations and misinform the public, constituting harm to communities. The viral circulation of these videos indicates realized harm. The Prime Minister's remarks emphasize the misuse of AI and the need for public awareness, confirming the AI system's role in causing harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated deepfake videos.

Video Purporting To Show Kajol Changing Outfit On Camera Is A Deepfake | BOOM

2023-11-15
BOOMLive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake technology, which is an AI system used to create fabricated videos by morphing faces. The misuse of this AI system has directly led to harm in the form of misinformation, violation of privacy, and non-consensual imagery, which are violations of human rights and cause harm to individuals and communities. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm.

Pathetic! After Rashmika Mandanna, a deepfake video of Kajol goes viral

2023-11-16
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated media. The misuse of these AI systems has directly led to harm in the form of identity theft, violation of privacy, and potential reputational damage to the individuals impersonated. The advisory and legal references further confirm the recognition of harm caused by these AI-generated deepfakes. Hence, this is an AI Incident due to realized harm from AI misuse.

What! From Rashmika Mandanna to Kajol, here are 5 celebs who have become victims of deepfake technology

2023-11-19
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated visual content by morphing faces and bodies. The event details actual instances where deepfake videos and images have been used maliciously against celebrities, causing harm to their privacy and reputation, which constitutes a violation of rights. The involvement of AI in creating these deepfakes and the resulting harm meets the criteria for an AI Incident, as the AI system's use has directly led to harm to individuals' rights and reputations.

Deepfake video of Bollywood actress Kajol surfaces online

2023-11-16
KalingaTV
Why's our monitor labelling this an incident or hazard?
The deepfake videos directly involve AI systems used to create manipulated content that harms the reputation and rights of the individuals depicted, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the videos have gone viral and caused concern. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

After Rashmika, deepfake video of Kajol changing clothes emerges on social media

2023-11-16
en.etemaaddaily.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as a deepfake, which is an AI-generated manipulated video. The content is misleading and fabricated, targeting a specific individual, thus constituting a violation of rights and harm to the person depicted. Since the AI system's use directly leads to this harm, this qualifies as an AI Incident under the framework's definition of harm to rights and communities.

After Rashmika and Katrina, now Kajol's deepfake video goes viral - Weekly Voice

2023-11-17
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The article describes the creation and viral spread of deepfake videos where AI is used to morph the faces of actresses onto other people's bodies. This involves the use of AI systems for generating manipulated content. The harm here is a violation of personal rights and potential reputational damage, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the harm is realized and ongoing (videos have gone viral), this qualifies as an AI Incident.

After Rashmika Mandanna, Kajol's deepfake video goes viral online

2023-11-17
Ananya Bhattacharya
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The viral spread of such videos directly harms the individuals depicted by violating their privacy and potentially damaging their reputation, which falls under violations of human rights and harm to communities. The involvement of the Indian government issuing directives further supports the recognition of actual harm. Hence, this event meets the criteria for an AI Incident.

Kajol Deepfake: After Rashmika Mandanna, video showing Kajol changing clothes in front of camera goes viral on internet

2023-11-17
NewsroomPost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated media. The videos have been widely disseminated, causing harm to the actresses' reputations and privacy, which constitutes a violation of rights under applicable law. The incident has led to legal action and public condemnation, confirming that harm has materialized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and spreading deepfake content.

After Katrina Kaif and Rashmika Mandanna, Kajol becomes the victim of deepfake video

2023-11-17
Bollywood Bubble
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that impersonate individuals without consent, leading to privacy violations and reputational harm. The deepfake videos have been disseminated online, causing concern and prompting legal action, which confirms that harm has materialized. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals' privacy and dignity.

Kajol Latest Victim Of Deepfake Trend Following Viral Videos Of Katrina Kaif And Rashmika Mandanna - Woman's era

2023-11-17
womansera.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake technology) used to create manipulated videos that falsely depict celebrities, leading to harm such as identity theft, reputational damage, and emotional distress. The harm is realized as the videos are circulating online and have caused concern among the affected individuals and communities. This fits the definition of an AI Incident because the AI system's use has directly led to violations of personal rights and harm to communities. The article does not merely discuss potential harm or responses but reports on actual misuse and its consequences.

PM Modi Deepfake Video: PM Narendra Modi Warns About AI After Rashmika Mandanna's Viral Deepfake Video

2023-11-18
jagrantv
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate images and videos to create realistic but fake content. The viral spread of such videos involving celebrities constitutes a direct harm to their privacy and reputation, fitting the definition of an AI Incident due to violation of rights and harm to communities. The Prime Minister's remarks highlight the societal impact and ongoing misuse of AI technology in this context.

Deepfake video: After Rashmika Mandanna, Kajol becomes the target

2023-11-16
PagalParrot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos, which are AI-generated synthetic media that manipulate visual content to impersonate individuals. The deepfakes have been widely disseminated, causing harm to the individuals' reputations and emotional well-being, which falls under violations of human rights and harm to communities. The article also references legal frameworks addressing such misuse, confirming the recognition of harm. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's misuse.

Deepfake Technology Raises Concerns about Prime Minister Modi's Image Manipulation

2023-11-18
worldreportnow.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system, to create manipulated videos that have harmed the reputation of Prime Minister Modi. This is a direct example of AI-generated disinformation causing harm to a person and potentially to the broader community by manipulating public opinion. The harm is realized, not just potential, as the incident has already occurred. Hence, it meets the criteria for an AI Incident under the OECD framework.

Surge in Deepfake Technology Scams: Quote by Sophos Advanced AI is being used to create deepfake videos and images from public social media profiles.

2023-11-16
Bollyinside - Breaking & latest News worldwide
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to create deepfake videos and images from public social media profiles, which are being used in scams. This constitutes harm to individuals and communities through misinformation and deception, fitting the definition of an AI Incident. The government's advisory and potential penalties for social media intermediaries further confirm the recognition of realized harm. The suggestion of digitally signed videos as a future mitigation is complementary information but does not negate the current incident status.

Deepfake Video of Indian Actress Kajol Changing Stirs Controversy

2023-11-16
Sputnik India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that harms the individual by violating her rights and spreading fake information. The harm is realized as the video is publicly circulated, causing reputational and personal harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community.

Key Points From The Sophos Quote There Has Been A Notable Increase In Fraudulent Activities Involving Deepfake Technology, Along With The Growing Adoption Of Artificial Intelligence (AI) Systems - AI Next

2023-11-18
Latest News on AI, Healthcare & Energy updates in India
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates manipulated videos, and its use in fraudulent activities directly harms individuals and communities by spreading misinformation. The article reports on actual incidents of deepfake dissemination and government actions to address these harms, indicating that harm has materialized. Therefore, this event meets the criteria for an AI Incident due to the direct link between AI-generated deepfakes and realized harm to people and communities.

Modi Flags Deepfake Issue After Doctored Singing Video

2023-11-17
Sputnik India
Why's our monitor labelling this an incident or hazard?
The article discusses the societal and governance response to the existing and potential harms caused by AI-generated deepfakes, including legal measures and requests for detection tools. While it acknowledges that harm from AI misuse is occurring, the main focus is on raising awareness and policy responses rather than detailing a specific AI Incident or a new AI Hazard event. Therefore, it fits best as Complementary Information, providing context and updates on responses to AI-related harms rather than reporting a distinct incident or hazard.

Deepfake video of Kajol rattles the internet amid Rashmika Mandanna controversy

2023-11-17
NEWS9LIVE
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems to create deepfake videos that impersonate individuals without consent, constituting a violation of personal rights and potentially causing psychological harm. The involvement of police investigations and legal frameworks highlights that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to violations of rights and harm to individuals resulting from the AI system's misuse.

After Rashmika Mandanna and Katrina Kaif, now a deepfake video of Kajol goes viral

2023-11-17
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The video is a deepfake, which is an AI-generated manipulated video that replaces a person's face with another's. This use of AI has directly led to harm in the form of misinformation and reputational damage to the celebrity Kajol. The viral spread of such content can cause harm to communities by spreading false narratives and violating personal rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in creating and disseminating the deepfake video.

PM Modi addresses nation, speaks on the 'deepfake' threat to Indian society

2023-11-17
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article describes concerns about deepfake videos created using AI, which can cause harm by spreading misinformation and disrupting society. However, the article does not report a specific AI incident causing direct or indirect harm but rather discusses the potential threat and calls for vigilance. This fits the definition of an AI Hazard, as the development and use of AI systems generating deepfakes could plausibly lead to harm, but no concrete harm event is detailed. Therefore, the classification is AI Hazard.

PM Modi raises concerns over misuse of AI, says deepfake videos a big issue for society

2023-11-17
http://www.uniindia.com/fadnavis-orders-probe-into-mumbai-pub-fire/states/news/1090400.html
Why's our monitor labelling this an incident or hazard?
The article centers on concerns about the potential misuse of AI (deepfake videos) and the societal problems they could cause, but it does not document any realized harm or a specific AI-related incident. The mention of deepfake videos circulating and causing outrage is contextual but not detailed as an incident causing direct harm. The PM's request to flag deepfakes and warnings is a preventive measure. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm, but no actual harm is reported here. It is not Complementary Information because it is not updating or responding to a previously reported incident but raising new concerns. It is not Unrelated because it involves AI misuse concerns. Hence, the classification is AI Hazard.
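The rationale above walks through the monitor's full triage order: Unrelated, then Complementary Information, then AI Incident, then AI Hazard. The decision logic described across these entries can be sketched as a small Python function; the function name, inputs, and structure are illustrative assumptions, not the monitor's actual implementation.

```python
# Minimal sketch of the triage logic described in the rationales above.
# The function and its boolean inputs are hypothetical simplifications.

def classify_event(involves_ai: bool, harm_realized: bool,
                   plausible_future_harm: bool, response_only: bool) -> str:
    """Return the monitor label for a news event."""
    if not involves_ai:
        # No AI system involved: out of scope for the monitor.
        return "Unrelated"
    if response_only:
        # Article mainly covers societal or governance responses to an
        # already-known AI-related harm (advisories, helplines, summons).
        return "Complementary Information"
    if harm_realized:
        # The AI system's use has directly led to harm,
        # e.g. a viral deepfake causing reputational damage.
        return "AI Incident"
    if plausible_future_harm:
        # Harm has not materialized but is credibly foreseeable.
        return "AI Hazard"
    return "Complementary Information"

# A viral deepfake of an actress: AI involved, harm realized.
print(classify_event(True, True, False, False))  # → AI Incident
```

Checking `response_only` before `harm_realized` mirrors how articles about helplines and government directives are labelled Complementary Information even though they reference realized harms.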

'A fake video of me dancing': PM Modi speaks on deepfake technology

2023-11-17
DailyThanthi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology to create fake videos that have been widely shared and believed to be real, causing reputational and informational harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm through misinformation and violation of individuals' rights. The Prime Minister's concern highlights the societal impact of such AI misuse.

"A deepfake video made of me... it is deeply worrying!": PM Modi

2023-11-17
Hindu Tamil Thisai
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated videos that have already been disseminated widely, causing reputational harm and social concern. This constitutes harm to communities and individuals through misinformation and violation of personal rights. The article describes realized harm from the use of AI-generated deepfakes, qualifying it as an AI Incident. Additionally, it mentions legal and governance responses, but the primary focus is on the harm caused by the AI system's use.

Central government summons social media platforms over the fake videos issue

2023-11-20
Hindu Tamil Thisai
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate synthetic media. The article states that these deepfake videos are actively spreading on social media, causing social disturbance and concern. This is a direct harm to communities and individuals' rights, fulfilling the criteria for an AI Incident. The government's summons and planned actions are responses to this incident, but the primary event is the ongoing harm from AI-generated deepfakes.

Swift action only for Modi's video

2023-11-18
தீக்கதிர்
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic but fake videos. The circulation of such videos involving celebrities and the Prime Minister constitutes a violation of privacy and potentially other rights, causing harm to individuals and communities. Since the videos have been released and are viral, the harm is realized, making this an AI Incident. The government's legal warnings are responses to this harm but do not negate the incident classification. The article does not describe only potential harm or general AI news, but actual harm caused by AI-generated content.

Social media platforms summoned over the fake videos issue

2023-11-20
Malaysiakini
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic fake media content. The article explicitly mentions AI-assisted deepfake videos causing social disruption and harm to individuals' reputations. The dissemination of such content on social media platforms constitutes harm to communities and individuals, fulfilling the criteria for an AI Incident. The government's legal and investigative responses further confirm the recognition of realized harm. Therefore, this event qualifies as an AI Incident.

PM Modi: PM expresses concern over spreading deepfake AI videos

2023-11-18
tamil.abplive.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated manipulated content. The spread of these videos has already caused harm by misleading the public and damaging the reputations of public figures, including the Prime Minister himself. This constitutes harm to communities and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The government's response and legal measures are complementary information but the main event is the realized harm from AI-generated deepfakes.

Deepfake issue grows to massive proportions; central government takes tough action

2023-11-18
tamil.abplive.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, as deepfake videos are generated using AI technology. The harm has already occurred through the dissemination of manipulated videos that misrepresent individuals, causing reputational and social harm. The government's regulatory response and engagement with social media platforms are complementary information but the core issue is an ongoing AI Incident due to realized harm from AI-generated deepfakes. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm to communities and violations of rights.