Delhi Police Investigates Viral Rashmika Mandanna Deepfake Video Incident


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video of actress Rashmika Mandanna, created using AI, went viral on social media, causing reputational harm. Delhi Police registered an FIR under IPC and IT Act sections, initiated an investigation, and requested Meta for account details. The Delhi Commission for Women also issued a notice demanding action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that harms the reputation and rights of a person (Rashmika Mandanna). The viral spread of this AI-generated fake video has caused significant harm, prompting police FIR registration and investigation, as well as intervention by a women's rights commission. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational damage).[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Police Register FIR in Rashmika Mandanna Deepfake Viral Video Case; Investigation Underway

2023-11-11
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that harms the reputation and rights of a person (Rashmika Mandanna). The viral spread of this AI-generated fake video has caused significant harm, prompting police FIR registration and investigation, as well as intervention by a women's rights commission. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational damage).

How Far Has the Investigation into Rashmika Mandanna's Deepfake Video Reached? Here Is What Delhi Police Have Done

2023-11-11
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake AI software) used to create manipulated video content that harms the reputation and privacy of an individual, Rashmika Mandanna. The police action and legal registration of a case indicate that harm has occurred. The AI system's use in generating the deepfake video is central to the incident, fulfilling the criteria for an AI Incident as it has directly led to violations of rights and reputational harm. The investigation and legal response further confirm the materialization of harm rather than a potential or future risk.

Delhi Police Take Major Action in Rashmika Mandanna Deepfake Video Case

2023-11-11
News18 India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to manipulate video content, leading to harm in the form of reputational damage and violation of rights of the actress. The police action and legal proceedings confirm that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and reputational damage).

Delhi Police Ask Meta to Share URLs in Rashmika Mandanna Deepfake Case

2023-11-11
News18 India
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically deepfake technology, which is used to create manipulated videos. The misuse of this AI-generated content has directly led to harm in terms of violation of personal rights and emotional distress to the individual involved. The police action and legal proceedings confirm that harm has materialized. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm and legal violations.

Rashmika Deepfake Video: Delhi Police Register FIR in Rashmika Mandanna Deepfake Video Case; Investigation Begins

2023-11-11
Hindustan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI technology used to manipulate video content. The harm caused is reputational damage to the actress, which falls under violations of rights and harm to communities. The police have registered an FIR and initiated an investigation, indicating that harm has occurred and is being addressed. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use (deepfake generation).

Who Made Rashmika Mandanna's 'Deepfake' Video? Police Write to Meta Seeking Information

2023-11-11
Hindustan
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake technology) used to create a manipulated video that harms the reputation and privacy of Rashmika Mandanna. The police have taken legal action and are investigating the incident, indicating that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to a violation of rights and reputational harm.

Accused Who Made Rashmika Mandanna's Deepfake Video to Be Arrested Soon; Here Is What Delhi Police Said

2023-11-11
Patrika News
Why's our monitor labelling this an incident or hazard?
The deepfake video is an AI-generated manipulated content that infringes on the actress's rights and causes reputational harm, fitting the definition of harm to rights under AI Incident criteria. The police action and FIR registration confirm that harm has occurred. The involvement of AI in creating the deepfake is explicit, and the event describes realized harm, not just potential harm. Therefore, this is classified as an AI Incident.

Police Act in Rashmika Mandanna Deepfake Video Case, Register FIR

2023-11-11
Webdunia
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The viral spread of such a video constitutes harm to the individual's rights and reputation, which falls under violations of rights and harm to communities. The police registering an FIR under relevant legal sections confirms that harm has materialized due to the AI system's misuse. Therefore, this event qualifies as an AI Incident.

Rashmika Mandanna Deepfake Video Case: Delhi Police Register FIR and Begin Investigation

2023-11-11
Navabharat
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a clear example of AI-generated content causing harm. The harm includes violation of personal rights and distress to the individual, as well as legal violations under IPC and IT Act. The police have registered an FIR and initiated investigation, indicating that harm has materialized and is recognized legally. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm and legal violations.

DCW Swings into Action on Rashmika Mandanna Deepfake Video; Swati Maliwal Issues Notice to Delhi Police

2023-11-11
Punjab Kesari
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to manipulate a person's image, leading to harm in terms of violation of rights and potential reputational damage. The involvement of the DCW and police notice shows that the harm is recognized and action is underway. Therefore, this qualifies as an AI Incident because the AI-generated deepfake video has directly led to harm (violation of rights) and legal consequences.

Delhi Police Register FIR over Rashmika Mandanna's Deepfake Video

2023-11-10
ThePrint Hindi
Why's our monitor labelling this an incident or hazard?
A deepfake video is created using AI techniques to manipulate or fabricate realistic video content. The viral spread of such a video directly harms the individual's reputation and can cause broader social harm. The police action and legal provisions cited indicate recognition of harm caused by the AI-generated content. Therefore, this event qualifies as an AI Incident because the AI system's use (deepfake generation) has directly led to harm (defamation and reputational damage).

Deepfake Video: Delhi Police Write to Meta for URLs

2023-11-11
ThePrint Hindi
Why's our monitor labelling this an incident or hazard?
The creation and sharing of a deepfake video using AI software constitutes an AI Incident because the AI system's use has directly led to reputational harm (a form of harm to communities and violation of rights). The police have registered a formal complaint and are taking action, indicating that harm has materialized. The involvement of AI in generating the fake video is explicit, and the harm is realized, not just potential.

Deepfake Video: Who Shared Rashmika Mandanna's Video? Delhi Police Seek Details from Meta

2023-11-11
NDTVIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The sharing of this video has caused reputational harm and legal violations, triggering police action and FIR registration. This constitutes direct harm caused by the AI system's use, meeting the criteria for an AI Incident under violations of human rights and legal protections. Therefore, this event is classified as an AI Incident.

No Escape for the Maker of Rashmika's Deepfake Video: Delhi Police Ask Meta for the Video's URL

2023-11-11
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake video generation) that has been used to create manipulated content causing harm to the individual's reputation and privacy, which are violations of rights under applicable law. The police action and FIR registration confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI-generated deepfake video.

Delhi Commission for Women Seeks Action Report from Delhi Police on Rashmika Mandanna Deepfake Video

2023-11-11
NDTVIndia
Why's our monitor labelling this an incident or hazard?
A deepfake video is a product of AI-based generative technology that manipulates visual content to create realistic but fake videos. The creation and distribution of such a video of a public figure without consent causes reputational harm and violates legal rights. The police investigation and legal action confirm that harm has occurred. Therefore, this event meets the criteria of an AI Incident because the AI system's use directly led to a violation of rights and harm to the individual involved.

Delhi Police Register FIR in Rashmika Mandanna Deepfake Video Case, Will Also Seek Meta's Help

2023-11-11
ThePrint Hindi
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of a deepfake video directly involves AI systems capable of generating realistic fake content. The harm caused includes violation of personal rights, identity theft, and reputational damage, which fall under violations of human rights and harm to communities. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident. The police action and investigation are responses to the incident, not the primary focus of the article, so this is not merely complementary information.

Delhi Police Register Case over Actress Rashmika Mandanna's Deepfake Video

2023-11-11
NDTVIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create manipulated video content that harms an individual's rights and reputation. The harm is realized as the video has been widely circulated, causing distress to the actress and prompting legal action. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual. Therefore, the classification is AI Incident.

3 Years Jail, 1 Lakh Fine: Centre's Reminder After Actor Rashmika Mandanna Deepfake Row

2023-11-08
Dinamalar - Cinema
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of a deepfake video using AI-based face-swapping technology constitutes the use of an AI system. The harm caused includes violation of personal rights and emotional harm to the individual, which falls under violations of human rights or breach of applicable law protecting fundamental rights. The government's introduction of a law to penalize such acts and enforce removal is a governance response to an AI Incident. Since the deepfake video has already circulated and caused harm, this qualifies as an AI Incident rather than a hazard or complementary information.

Actress Rashmika Fake Video Case: Women's Commission Issues Notice to Delhi Police

2023-11-10
DailyThanthi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake videos, which have caused harm by spreading false and misleading content that damages the reputation and privacy of individuals. This constitutes a violation of rights and harm to communities through misinformation and defamation. The involvement of AI in creating the fake videos and the resulting harm meets the criteria for an AI Incident. The notice and warnings from authorities are responses to this incident, but the primary event is the realized harm caused by AI-generated deepfakes.

Delhi Police Register Case over Actress Rashmika's Fake Video

2023-11-11
DailyThanthi
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the fake video was created using AI technology and has been widely disseminated on social media, causing reputational harm to the actress. The police have registered a case, indicating that harm has occurred and is being addressed legally. This meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of rights and harm to community trust).

Fallout of the Rashmika Case: Three Years' Jail for Spreading Fake Videos, Warns Centre

2023-11-08
Hindu Tamil Thisai
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos (AI-generated synthetic media) that have been disseminated, causing reputational harm and misinformation. The government's warning and legal measures are responses to an ongoing harm caused by AI misuse. The AI system's use has directly led to harm (reputational and potential rights violations), fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Rashmika Fake Video Case: Case Registered Under Five Sections

2023-11-11
Dinamani
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI technology to create a deepfake video, which is an AI system generating fabricated content. The dissemination of this deepfake video has caused harm to the actress's reputation and privacy, which falls under violations of human rights and harm to communities. The law enforcement response and legal framework indicate that harm has materialized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake video.

Three Years' Jail for Spreading Fake Videos

2023-11-08
Tamil Murasu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which have caused harm by spreading false and misleading content about individuals. The government's legal measures to penalize such acts and mandate removal of fake content are responses to an existing AI Incident involving violations of rights and harm to individuals' reputations. Therefore, this qualifies as an AI Incident due to realized harm from AI-generated fake videos and the direct involvement of AI technology in creating the harmful content.

Rashmika Fake Video Case: Delhi Women's Commission Sends Notice to Police

2023-11-10
dinasuvadu.com
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created by AI-based face-swapping technology (an AI system) that has caused harm to the actress's reputation and personal rights, which qualifies as harm to individuals and communities. The involvement of AI in creating the manipulated video is explicit, and the harm has already occurred, making this an AI Incident. The police investigation and legal actions are responses to this incident but do not change the classification.

Rashmika Mandanna: Women's Commission Sends Notice to Delhi Police over Fake Video

2023-11-10
tamil.abplive.com
Why's our monitor labelling this an incident or hazard?
The use of AI technology to create a fake video that misrepresents a person and spreads misinformation is a direct harm to the individual's rights and reputation, which falls under violations of human rights or breach of obligations protecting fundamental rights. Since the AI system's use has directly led to this harm, this qualifies as an AI Incident.

Actress Rashmika's Deepfake Video: Case Registered Under Five Sections

2023-11-11
dinasuvadu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is generated using AI technology to manipulate visual content. The harm includes violation of the actress's rights and reputational damage, which are direct harms caused by the AI system's use. The police action and legal notices confirm that the harm is realized and being addressed. Hence, this is an AI Incident rather than a hazard or complementary information.

Rashmika Mandanna Fake Video: Delhi Police Register Case Under Five Sections

2023-11-11
Indian Express Tamil
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video that misrepresents a person, causing harm to her reputation and leading to police investigation and legal proceedings. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual. The police FIR and investigation confirm that harm has materialized, not just a potential risk. Therefore, the classification is AI Incident.

Rashmika Mandanna Viral Video: From Lincoln, Stalin and Katrina to Rashmika, the History of Deepfake AI

2023-11-07
hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake AI) to create manipulated video content that has gone viral, causing social and reputational harm. Deepfake videos can lead to violations of individual rights and harm to communities by spreading misinformation or damaging reputations. Since the video is already viral and causing controversy, the harm is realized rather than just potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.

How Are Deepfake Videos Made, and How Can You Spot Them? A Simple Guide

2023-11-07
Prabhat Khabar - Hindi News
Why's our monitor labelling this an incident or hazard?
The article discusses AI-generated deepfake videos and their identification but does not report a specific harmful event or a credible risk event involving AI systems. It is informational and aims to educate readers about the technology and its detection, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Explainer: What Is Deepfake AI Technology? How to Tell Real from Fake

2023-11-07
India TV Hindi
Why's our monitor labelling this an incident or hazard?
The article is educational and informative about Deepfake AI technology and its potential for misuse, but it does not describe a concrete event involving harm or a credible risk of harm from a specific AI system. It neither reports an AI Incident nor an AI Hazard. It also does not focus on responses, governance, or updates related to AI incidents. Therefore, it fits best as Complementary Information, providing context and understanding about AI technology and its societal implications without reporting a new incident or hazard.

Deepfake: What Is Deepfake Technology, in Simple Terms? How It Works and How to Identify It

2023-11-09
Navabharat
Why's our monitor labelling this an incident or hazard?
The article is educational and informative about deepfake AI technology and its potential misuse but does not describe a specific AI Incident or AI Hazard event. It does not report any realized harm or a particular event where deepfake AI caused harm or a credible imminent risk. It also discusses legal frameworks and detection methods, which are complementary information to understanding AI impacts. Therefore, it fits best as Complementary Information rather than an Incident or Hazard.

Not Just Rashmika Mandanna: 1.43 Lakh Deepfake Videos Surfaced Online This Year

2023-11-07
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (deep learning algorithms) to create deepfake videos, which are known to cause harm such as violations of privacy, reputational damage, and potential psychological harm. However, it does not detail a specific event where harm has already occurred due to a particular AI system's use or malfunction. Instead, it highlights the scale of the problem and the potential for harm, as well as responses to detect and mitigate such content. Therefore, this fits the definition of an AI Hazard, as the development and use of deepfake AI systems could plausibly lead to incidents of harm, but no specific incident is described as having occurred in this article.

Rashmika Mandanna Deepfake Video: You Could Be a Victim Too; How to Protect Yourself and Spot a Deepfake

2023-11-07
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (Deepfake generation using AI models) and discusses potential harms such as misinformation and deception. However, it does not describe a concrete AI Incident where harm has directly or indirectly occurred, nor does it describe a specific AI Hazard event where harm could plausibly occur imminently. Instead, it provides background information, expert insights, and advice on detection and awareness, which fits the definition of Complementary Information. Therefore, the classification is Complementary Information.

Government Gets Tough on Rashmika Mandanna Deepfake Video; IT Minister Issues Warning

2023-11-07
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology using AI and machine learning) to create manipulated videos that harm an individual's reputation and privacy, which is a violation of rights. The harm is realized as the deepfake video is already viral, causing reputational and possibly psychological harm. The government's legal response and warnings to platforms further confirm the recognition of harm caused. Hence, this is an AI Incident as the AI system's use has directly led to harm and legal consequences.

After Rashmika and Katrina, Now Sara and Shubman Fall Victim to Deepfakes: What Is This Technology, and How Can You Stay Safe?

2023-11-09
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (deepfake technology) that has directly led to harms such as defamation, blackmail, and extortion, which are violations of rights and harm to individuals and communities. Since these harms are occurring and the AI system's role is pivotal in enabling these harms, this qualifies as an AI Incident.

Uproar over Rashmika Mandanna's Deepfake Video: What Is the Technology That Makes Fake Look Real?

2023-11-06
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The article focuses on explaining deepfake AI technology, its capabilities, and the risks it poses, including potential misuse for scams and blackmail. It highlights the plausible future harms that deepfakes could cause and mentions government preparations to counter such threats. Since no specific harm or incident is reported as having occurred, and the main content is about the technology and responses to it, this fits the definition of an AI Hazard with elements of Complementary Information. However, because the article primarily discusses the potential for harm and the technology's risks rather than a concrete incident or a detailed governance response, the classification as AI Hazard is most appropriate.

Suspect Detained for Actor's Deepfake Clip

2023-11-15
The Times of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate manipulated content. The circulation of such a video has directly led to reputational harm to the individual depicted, which qualifies as harm to a person or group. The involvement of law enforcement and formal complaints confirms that harm has materialized. Therefore, this event qualifies as an AI Incident.

Delhi Police questions Bihar youth in Rashmika Mandanna's deepfake video case

2023-11-15
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create a deepfake video, which is an AI system generating manipulated content. The viral spread of this video constitutes a violation of rights and harm to the individual depicted, fulfilling the criteria for an AI Incident. The police action and FIR confirm that harm has materialized rather than being a potential risk.

Quick redress mechanism needed to address the deepfake problem: Experts

2023-11-17
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake AI-generated video, which is an AI system generating synthetic content. The harm caused includes forgery, identity theft, privacy violation, and damage to reputation, all of which are harms to human rights and personal dignity. The registration of an FIR indicates that harm has already occurred. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to an individual.

After Rashmika Mandanna and Katrina Kaif, Kajol's deepfake GRWM video rattles the internet

2023-11-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are created using AI systems capable of generating realistic manipulated content. The misuse of these AI systems has directly caused harm to individuals' reputations and privacy, triggering legal and police responses. This constitutes an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities through misinformation and forgery.

Rashmika Mandanna's deepfake clip: Suspect from Bihar has been detained

2023-11-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is generated using AI techniques. The video caused reputational harm to Rashmika Mandanna, leading to legal complaints and police action. This fits the definition of an AI Incident because the AI system's use (deepfake generation) directly led to harm (violation of rights and reputational damage). The detention of a suspect and government reminders to social media platforms further confirm the seriousness and realized harm of the incident.

Rashmika Mandanna DEEPFAKE: Delhi Police question 19-year-old Bihar youth for allegedly posting the morphed video

2023-11-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The harm has materialized as the video went viral, causing reputational harm and legal consequences. The police investigation and FIR registration under forgery and IT laws confirm the recognition of harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in creating and distributing the deepfake video.

Delhi Cops Question Bihar Teen In Rashmika Mandanna's Deepfake Video Case

2023-11-15
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI technology (deepfake) to create a manipulated video that was shared widely, causing reputational harm to the actress. The involvement of AI in generating the deepfake and its distribution leading to legal action and police investigation fits the definition of an AI Incident, as it directly led to harm (violation of rights and reputational damage).

Delhi Police Questions Bihar Youth in Rashmika Mandanna's Deepfake Video Case

2023-11-15
News18
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of a deepfake video, which is an AI system-generated manipulated content. The harm caused is a violation of the actor's rights, specifically reputation and privacy, which falls under violations of human rights or breach of applicable law. Since the harm has already occurred and the AI system's use is central to the incident, this qualifies as an AI Incident.

Rashmika Mandanna Viral Video Row: Delhi Police Interrogates Bihar Youth In Connection With The Case

2023-11-15
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI systems capable of generating realistic but fake content. The video caused reputational harm to the actor Rashmika Mandanna and led to police investigation and legal proceedings under laws related to forgery and harm to reputation. The AI system's use in creating and distributing the deepfake video directly led to harm, fulfilling the criteria for an AI Incident. The interrogation and FIR filing confirm the harm has materialized, not just a potential risk.

Rashmika Mandanna deepfake video: Delhi Police question Bihar teen

2023-11-16
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI to alter a person's face, which was then circulated widely, causing reputational and emotional harm to the individual. The use of AI in generating the manipulated video is explicit, and the harm (violation of rights, identity theft, reputational damage) has already occurred. This fits the definition of an AI Incident as the AI system's use directly led to harm to the individual and community.

19-year-old from Bihar arrested in connection with Rashmika Mandanna deepfake row

2023-11-16
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The harm caused is a violation of privacy and dignity, which falls under violations of human rights and fundamental rights. The AI system's use directly led to this harm, making this an AI Incident. The involvement of law enforcement and Meta's cooperation further confirms the seriousness and realized harm of the incident.

Delhi cops question 19-year-old from Bihar over deep fake video

2023-11-16
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a manipulated video generated by AI techniques. The video was widely circulated, causing reputational harm to the individual depicted, which is a violation of rights under applicable law. The police investigation and FIR registration confirm that harm has occurred. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake video.

Delhi Police questions Bihar youth in Rashmika Mandanna's deepfake video case

2023-11-15
ThePrint
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of a deepfake video involves the use of AI systems to generate manipulated content. The video has been uploaded and circulated, causing harm to the reputation of the person depicted, which is a violation of rights under applicable law. The police investigation and FIR registration confirm that harm has occurred. Therefore, this event qualifies as an AI Incident due to realized harm caused by the use of an AI system (deepfake).

Rashmika Mandanna Deepfake Video: Delhi police questions 19-year old from Bihar, suspect he uploaded actress' video

2023-11-19
Firstpost
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated deepfake technology to create and distribute manipulated video content that harms the reputation of the actress Rashmika Mandanna. The involvement of AI in generating the deepfake video is explicit, and the harm caused (reputational damage and potential violation of privacy and rights) is realized, as evidenced by the police investigation and FIR registration. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Delhi Police Investigates Bihar Youth Over Rashmika Mandanna Deepfake Video Circulation

2023-11-15
The Hans India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-based generative technology. The circulation of this manipulated content has led to legal proceedings under forgery and IT laws, indicating realized harm to the reputation of the person depicted and potential broader social harm. The AI system's use in creating and distributing the deepfake is central to the incident, fulfilling the criteria for an AI Incident due to violation of rights and harm to communities.

Delhi Police questions 18-yr-old in Rashmika Mandanna's deepfake video case

2023-11-15
The Siasat Daily
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of a deepfake video using AI technology directly led to reputational harm and legal action, fulfilling the criteria for an AI Incident. The AI system's use in generating the deepfake video is central to the incident, and the harm to the individual's reputation and potential broader social harm from misinformation are realized harms. Therefore, this event qualifies as an AI Incident.

Delhi Police questions Bihar youth in Rashmika Mandanna's deepfake video case

2023-11-15
mid-day
Why's our monitor labelling this an incident or hazard?
The event describes the circulation of a deepfake video, which is a product of AI technology used to create manipulated content. The harm is realized as reputational damage and legal violations have occurred, leading to police action. The AI system's use in creating the deepfake directly led to harm to the individual's reputation, qualifying this as an AI Incident under the framework.

Delhi Police questions Bihar youth in Rashmika Mandanna's deepfake video case | Entertainment

2023-11-15
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is generated using AI techniques. The circulation of this manipulated video has caused harm to the reputation of the individual depicted, which is a violation of rights and harm to communities. The police investigation and FIR registration confirm that harm has occurred. Therefore, this is an AI Incident due to the realized harm caused by the AI-generated deepfake content.

Latest News | Delhi Police Questions Bihar Youth in Rashmika Mandanna's Deepfake Video Case | LatestLY

2023-11-15
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI technology used to create manipulated media. The video was widely circulated, causing reputational harm to the actor Rashmika Mandanna, which falls under violations of rights and harm to communities. The police investigation and FIR registration indicate that harm has occurred. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfake content.

Rashmika Mandanna deepfake video: Delhi Police questions 19-year-old from Bihar

2023-11-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI-based deep learning techniques to manipulate video content, which was then shared on social media causing harm to the depicted individual. The harm is realized as the video went viral and the individual affected publicly expressed concern about the misuse of technology. The police investigation and involvement of social media platforms further confirm the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake video violating rights and causing reputational damage.

Delhi Police questions Bihar youth in Rashmika Mandanna's deepfake video case

2023-11-15
Press Trust of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate visual content to create realistic but fake videos. The creation and distribution of such videos can cause harm to individuals' reputations and privacy, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since the video was widely circulated and the police are investigating the uploader, the event reflects an AI Incident where the AI system's use has directly or indirectly led to harm.

Rashmika Mandana DeepFake video: Must Read! 19 year old Bihar youth

2023-11-16
Tellychakkar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (DeepFake technology) used to create a forged video that harms Rashmika Mandanna's reputation. The legal actions and police investigation confirm that harm has occurred due to the AI system's use. The harm is a violation of rights and a breach of legal protections against forgery and impersonation. Therefore, this meets the criteria for an AI Incident as the AI system's use directly led to harm.

Rashmika Mandanna Deepfake controversy: Police interrogate Bihar teenager who allegedly uploaded the viral video

2023-11-15
NewsroomPost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-based generative technology. The video caused reputational harm and violated the actress's privacy, which are recognized harms under the AI Incident definition (violations of human rights and breach of applicable law). The AI system's use (deepfake generation) directly led to these harms. The police investigation and legal actions further confirm the seriousness of the incident. Hence, this is an AI Incident.

Rashmika Mandanna's deepfake video: Delhi Police find 'important clues'; to make arrest soon | Hindi Movie News - Times of India

2023-11-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The deepfake video is AI-generated manipulated content that harms the reputation and privacy of Rashmika Mandanna, constituting a violation of rights. The police investigation and planned arrest show that harm has occurred or is ongoing. The involvement of AI in creating the deepfake and the resulting harm to the individual classify this as an AI Incident.

Got vital clues in Rashmika Mandanna's DEEPFAKE video case; accused will be arrested soon: Delhi Police | Etimes - Times of India Videos

2023-11-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The event describes the creation and distribution of a deepfake video, which is an AI-generated manipulated video that replaces one person's face with another's. This constitutes a violation of personal rights and can cause harm to the individual's reputation and privacy, fitting the definition of an AI Incident under violations of human rights or breach of applicable law. The police investigation and government response further confirm the seriousness of the harm caused by the AI system's misuse.

Verifying Vital Clues In Rashmika Mandanna's Deepfake Video Case, Say Cops

2023-11-23
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI to alter a woman's face to resemble Rashmika Mandanna. This AI-generated content has been uploaded and circulated, causing harm by violating the actor's rights and potentially misleading the public. The police investigation and government response indicate recognition of the harm caused. Since the AI system's use has directly led to a violation of rights and harm to the community, this qualifies as an AI Incident.

After Rashmika Mandanna's deepfake video row, IT minister's 7-day deadline for social media

2023-11-24
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The deepfake video is an AI-generated fabricated video that has already caused harm by misleading viewers and potentially damaging the reputation of the individual depicted. The involvement of AI in creating the deepfake and the resulting harm to the individual and society (misinformation, potential violation of rights) qualifies this as an AI Incident. The article details ongoing harm and law enforcement investigation, confirming realized harm rather than just potential risk. The government's regulatory response and platform development are complementary but do not negate the incident classification.

Meta not cooperating in Rashmika Mandhana deepfake probe: Delhi Police sources

2023-11-24
India Today
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of deepfake videos using AI technology, which directly harm the individuals depicted by violating their rights and potentially causing reputational damage. The police investigation and the challenges in tracing the perpetrator highlight the AI system's role in causing harm. Therefore, this meets the criteria for an AI Incident as the AI system's use has directly led to harm to a person (violation of rights and reputational harm).

Deepfake video of Rashmika Mandanna: Delhi Police verifying vital clues through technical analysis

2023-11-23
ThePrint
Why's our monitor labelling this an incident or hazard?
The deepfake video is AI-generated manipulated content that has been distributed online, causing reputational and privacy harm to the individual depicted. This fits the definition of an AI Incident because the AI system's use (deepfake generation) has directly led to harm to a person and communities. The investigation and legal response further confirm the materialization of harm rather than a potential risk. Therefore, this event is classified as an AI Incident.

Latest News | Deepfake Video of Rashmika Mandanna: Delhi Police Verifying Vital Clues Through Technical Analysis | LatestLY

2023-11-23
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The video has been disseminated, causing harm to the individual's rights and reputation, which constitutes a violation of human rights and harm to communities. The police investigation confirms that the AI system's use has directly led to harm. Therefore, this qualifies as an AI Incident under the framework.

Deepfake video of Rashmika Mandanna: Delhi Police verifying vital clues through technical analysis

2023-11-23
Press Trust of India
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that create manipulated realistic content. The presence of a deepfake video directly involves AI system use. The harm caused includes violation of the individual's rights and potential reputational damage, which is a recognized harm under the framework. The police investigation confirms the event is a realized incident, not just a potential hazard. Hence, this is classified as an AI Incident.

Got vital clues in Mandanna deepfake video case: Police

2023-11-23
The Pioneer
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video of actor Rashmika Mandanna, which is an AI-generated manipulated video. The video has been uploaded and circulated, causing harm to the individual and potentially to the community by spreading false information. The police investigation and FIR registration confirm that harm has materialized. The AI system's use (deepfake generation) directly led to the harm, fulfilling the criteria for an AI Incident.

Probe Into Actor Rashmika Mandanna's Deepfake Video Hits Dead End

2023-11-24
NDTV
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video, which is an AI-generated manipulated video, causing harm to an individual by spreading false and harmful content. This constitutes a violation of personal rights and can be considered harm to the individual and community. The AI system's misuse directly led to this harm, fulfilling the criteria for an AI Incident. The inability to identify the perpetrators does not negate the realized harm from the AI-generated content.

Deepfake Case Probe Hits Dead End: Social Media Platforms Unable to Trace Culprits

2023-11-24
TimesNow
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The creation and dissemination of fake profiles using deepfakes constitutes a violation of personal rights and can cause reputational harm, which is a form of harm to individuals and communities. The article confirms that such content was created and circulated, indicating realized harm. The inability of social media platforms to trace the culprits does not negate the occurrence of harm; it only affects the investigation's progress. Hence, this event is classified as an AI Incident due to the realized harm caused by AI-generated deepfake content.

Probe into deepfake video of actor Rashmika Mandanna hits dead end

2023-11-24
OpIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is generated by AI systems capable of creating realistic manipulated videos. The harm is realized as the actor has expressed emotional distress and reputational damage, which falls under harm to a person. The use of fake accounts and VPNs to distribute the content indicates malicious use of AI-generated content. The investigation and police involvement further confirm the seriousness of the harm. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

Alia Bhatt's deep fake video goes viral on social media

2023-11-25
OpIndia
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (deepfake technology using machine learning and face-swapping AI) whose use has directly led to harm, including violation of personal rights and reputational harm to the individuals depicted. The viral spread of these manipulated videos on social media constitutes harm to communities and individuals. The distress expressed by the victims and the governmental response further confirm the realized harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Probe in deepfake case of actor Rashmika Mandanna hits dead end | Headlines

2023-11-24
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI-based generative technology, which is a clear example of an AI system's use leading to harm. The harm includes emotional distress to the actor and potential reputational damage, which falls under harm to persons and communities. The investigation and police involvement confirm that the AI system's misuse has caused realized harm. Therefore, this qualifies as an AI Incident.

India News | Probe in Deepfake Case of Actor Rashmika Mandanna Hits Dead End | LatestLY

2023-11-24
LatestLY
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of a deepfake video, which is AI-generated manipulated content that harmed the actor's reputation and emotional well-being. The AI system's use in generating the deepfake video directly led to harm, fulfilling the criteria for an AI Incident. Although the investigation has not identified the perpetrator, the harm from the AI system's misuse is realized. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Probe in deepfake case of actor Rashmika Mandanna hits dead end

2023-11-24
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI technology that impersonates an actor, causing harm to her reputation and emotional well-being. The video was shared on social media platforms, and the investigation is ongoing to identify the perpetrators. The use of AI to create and distribute manipulated content that harms individuals fits the definition of an AI Incident, as it directly leads to harm to a person and potentially violates rights. Although the investigation has hit a dead end in tracing the source, the harm has already occurred due to the AI-generated content.