Nora Fatehi Confronts Brand Over Viral Deepfake Ad

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Actress Nora Fatehi discovered a deepfake video of her likeness promoting a clothing brand’s sale; the video circulated widely, posing reputational risk. She publicly denounced the ad as “fake” via Instagram Stories. The brand has yet to respond. The incident echoes prior deepfake misuse targeting celebrities, highlighting AI-driven identity fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of deepfake AI technology to create deceptive content that misleads people and promotes fraudulent activity. This constitutes a direct harm to individuals (financial fraud victims) and harm to the celebrity's reputation, fitting the definition of an AI Incident due to realized harm caused by the AI system's malicious use.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Transparency & explainability; Accountability; Robustness & digital security

Industries
Consumer products; Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

OMG! Nora Fatehi deepfake allegedly hits the internet, actor says she is 'shocked' | Business - Times of India

2024-01-22
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create deceptive content that misleads people and promotes fraudulent activity. This constitutes a direct harm to individuals (financial fraud victims) and harm to the celebrity's reputation, fitting the definition of an AI Incident due to realized harm caused by the AI system's malicious use.

Nora Fatehi 'shocked' after her deep fake video goes viral | Etimes - Times of India Videos

2024-01-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video created using AI technology that replicates Nora Fatehi's voice and body language without her consent. The video has gone viral, indicating harm to the celebrity's reputation and potential misinformation to the public. The AI system's use here directly leads to harm in terms of violation of personal rights and reputational damage, fitting the definition of an AI Incident.

Nora Fatehi calls out deepfake scam by brand using her photo - The Statesman

2024-01-21
The Statesman
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of a deepfake AI system to generate a fake image and video of Nora Fatehi endorsing a brand's sale, which she publicly denounces as not being her. The use of deepfake technology here is an AI system's use leading to a violation of rights (image and voice likeness without consent) and potential harm to the celebrity's reputation and to consumers who are misled by the fake advertisement. This meets the criteria for an AI Incident as the AI system's use has directly led to harm in terms of rights violation and deception.

Nora Fatehi's Deepfake Video Goes Viral; Actress' Denies To Be Part Of Retail Clothing Band Advertisement That Allegedly Shows Her - DEETS INSIDE | SpotboyE

2024-01-21
spotboye.com
Why's our monitor labelling this an incident or hazard?
The video is an AI-generated deepfake, which is an AI system's output used maliciously to create false content. The actress denies involvement, indicating harm through misrepresentation and potential violation of her rights. The event describes realized harm caused by the AI system's misuse, fitting the definition of an AI Incident due to violation of rights and harm to the individual depicted.

Nora Fatehi Calls Out Brand For Using Her Deepfake Video: Here's The Truth Behind Viral Video

2024-01-21
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated synthetic media. The unauthorized use of Nora Fatehi's deepfake video in an advertisement and the creation of Rashmika Mandanna's deepfake video that led to an arrest demonstrate direct harm through violation of personal rights and potential reputational damage. These harms fall under violations of human rights and breach of obligations protecting fundamental rights. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Nora Fatehi Deepfake Video: What the Controversy Is All About

2024-01-20
NDTV
Why's our monitor labelling this an incident or hazard?
The video involves an AI system (deepfake technology) generating synthetic media of Nora Fatehi. However, the use is part of a planned awareness campaign to educate viewers about online scams, not to deceive or cause harm. There is no realized harm or violation of rights reported, nor is there a plausible risk of harm stemming from this event as described. Therefore, this is not an AI Incident or AI Hazard. The event provides contextual information about AI-generated content used for social good, fitting the definition of Complementary Information.

Nora Fatehi Falls Victim To Deepfake: Video Of Fashion Brand Promotion Raises Serious Questions

2024-01-20
Oneindia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to create a fake video impersonating a real person without consent, a direct misuse of AI leading to reputational harm and a risk of financial fraud. The harm is realized, as the video circulated widely, deceiving viewers and prompting public concern. The involvement of AI in generating the deepfake and the resulting harm to the individual's rights, along with potential harm to the community through misinformation, meet the criteria for an AI Incident rather than a hazard or complementary information.

Nora Fatehi latest victim to deepfake videos, issues clarification

2024-01-21
The New Indian Express
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic images and voices. The unauthorized use of Nora Fatehi's likeness in a deepfake video promoting a brand without her consent directly infringes on her rights and causes reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights and harm to the individual involved. The event describes an actual occurrence of harm, not just a potential risk, thus qualifying as an AI Incident rather than a hazard or complementary information.

Nora Fatehi reacts after fashion brand uses her lookalike for promotions: 'This is...'

2024-01-21
Daily News and Analysis (DNA) India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a deepfake or lookalike video, i.e. AI-generated content impersonating a person. The brand's use of this video without consent, and its refusal to take the video down after being called out, harms the individual's rights and reputation. This meets the criteria for an AI Incident, as the AI system's use has directly led to a violation of rights.

Nora Fatehi latest victim to deepfake videos, issues clarification

2024-01-21
Telangana Today
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that manipulate visual and audio content to create realistic but fake representations of individuals. The use of such deepfakes without consent infringes on personal rights and can cause harm to the individuals involved. Since the article describes actual deepfake videos already disseminated and the harm to the celebrities' rights and reputations, this qualifies as an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights.

Nora Fatehi speaks out as latest victim to deep fake videos

2024-01-21
english.madhyamam.com
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake videos that use AI technology to realistically mimic the appearance, mannerisms, and voice of celebrities without their permission. This unauthorized use of AI-generated content infringes on the individuals' rights and can lead to harm such as reputational damage and misinformation. Since the harm (violation of rights and reputational harm) has already occurred due to the use of these deepfake videos, this qualifies as an AI Incident under the framework.

Nora Fatehi Expresses SHOCK Over Her Deepfake Video Following Alia Bhatt and Rashmika Mandanna; Says 'This Is Not Me' | 🎥 LatestLY

2024-01-20
LatestLY
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that synthesize realistic images and audio to impersonate individuals. The creation and spread of such a video without consent constitutes a violation of personal rights and can cause reputational harm, misinformation, and potential emotional distress. Since the video is already circulating and the individual has publicly responded, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating the deepfake content.

Nora Fatehi latest victim to deepfake videos, issues clarification

2024-01-21
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated synthetic media. The use of such videos without consent leads to harm in the form of reputational damage and violation of personal rights, fitting the definition of harm to individuals and violation of rights under AI Incident criteria. The direct impact on Nora Fatehi and the legal action taken in a related case further confirm the realized harm and the AI system's pivotal role in causing it.

Nora Fatehi also falls prey to deepfake - Daily Times

2024-01-21
Daily Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake video, which is a product of AI-based generative technology, creating a realistic but fake video of Nora Fatehi. The harm is realized as it affects the celebrity's reputation and credibility, which falls under harm to communities and individuals. The AI system's use in generating this misleading content directly led to this harm. Hence, this is an AI Incident.

Nora Fatehi's deepfake video goes viral, actress issues clarification

2024-01-21
KalingaTV
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake videos, which are AI-generated synthetic media. The harm is realized as the videos are viral and misleading, causing reputational harm and misinformation, which falls under harm to communities and violation of rights. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident. The article does not merely warn of potential harm but describes ongoing harm from the viral deepfake videos.

Nora Fatehi falls prey to deepfake, calls out forged video

2024-01-20
en.etemaaddaily.com
Why's our monitor labelling this an incident or hazard?
The deepfake video is created using AI techniques to generate realistic but forged content. The use of AI to produce a fake video of a celebrity promoting a brand without their consent constitutes a violation of personal rights and can cause harm to the individual's reputation. Since the video is already viral and causing harm, this qualifies as an AI Incident under violations of rights and harm to communities.

Nora Fatehi latest victim to deepfake videos, issues clarification - OrissaPOST

2024-01-21
OrissaPOST
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are AI systems that generate manipulated visual and audio content. The harm is realized as the actress is falsely represented in a video promoting a brand without her consent, which can damage her reputation and violate her rights. The article confirms the presence of harm and the AI system's role in causing it. Hence, this is an AI Incident rather than a hazard or complementary information.

'This is not me': Nora Fatehi confronts brand for spreading her viral 'deepfake' video

2024-01-21
News9live
Why's our monitor labelling this an incident or hazard?
The video is an AI-generated deepfake but was used deliberately as part of an awareness campaign, not causing direct harm. The article discusses the potential dangers of deepfakes and references a related harmful incident, but the main event described is the campaign video itself, which does not constitute an AI Incident. It also does not describe a plausible future harm from this specific video, as it is intended to educate and warn. Therefore, this is best classified as Complementary Information, providing context and societal response to AI deepfake technology and its risks.

Nora Fatehi's 'deepfake' video sparks controversy: Unraveling the ...

2024-01-21
PTC News
Why's our monitor labelling this an incident or hazard?
The video is an AI-generated deepfake but is used intentionally for a positive awareness campaign, not causing harm or violating rights. The article does not describe any harm resulting from this video or any malfunction or misuse of the AI system. The mention of a malicious deepfake arrest is background information. Hence, the event does not meet criteria for AI Incident or AI Hazard but fits Complementary Information as it informs about AI use and societal issues related to deepfakes and fraud awareness.

Nora Fatehi's 'Deepfake' Video Controversy: Everything To Know

2024-01-22
SheThePeople
Why's our monitor labelling this an incident or hazard?
The staged deepfake video by Nora Fatehi is part of an awareness campaign and does not itself cause harm, so it is not an AI Incident. However, the mention of the arrest for creating a malicious deepfake video that harmed an individual indicates that such malicious uses have caused harm, but this is reported as a past event, not a new incident. The article mainly provides context on the issue of deepfakes, awareness efforts, and regulatory calls, which fits the definition of Complementary Information rather than a new AI Incident or AI Hazard.