Katrina Kaif's Deepfake Video Sparks Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake video of Bollywood actress Katrina Kaif speaking French has gone viral, raising concerns about AI misuse. The video, based on footage from a 2017 event, overlays an AI-generated French voiceover, misleading viewers and potentially violating her privacy and intellectual property rights. This incident highlights the growing issue of deepfake technology in media.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system generating a synthetic voice (AI-generated voiceover) to create a deepfake video. The harm is real, as the deepfake misleads viewers and can damage the reputation of the actress, constituting harm to the individual and potentially to the community through misinformation. Therefore, this qualifies as an AI Incident because the AI system's use has led to harm through misinformation and reputational damage.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Safety

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
General public; Other

Harm types
Human or fundamental rights; Reputational; Economic/Property; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Katrina Kaif Becomes Victim Of Deepfake; Actress' Morphed Video Speaking In French Goes Viral

2024-05-01
Jagran English
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a synthetic voice (AI-generated voiceover) to create a deepfake video. The harm is real, as the deepfake misleads viewers and can damage the reputation of the actress, constituting harm to the individual and potentially to the community through misinformation. Therefore, this qualifies as an AI Incident because the AI system's use has led to harm through misinformation and reputational damage.

Katrina Kaif targeted by deepfake again as manipulated video surfaces; fans say 'AI at its best'

2024-05-01
Economic Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos that manipulate Katrina Kaif's image and speech, which is a clear example of AI system use. The manipulated content has caused harm by misleading viewers, spreading misinformation, and potentially damaging the individual's reputation, which falls under harm to communities and violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm through the dissemination of manipulated media.

Viral AI Video Shows Katrina Kaif Speaking In French. "Deep Fakes Are Getting Scary," Says The Internet

2024-05-01
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation) creating synthetic video content that could mislead viewers. Although the video is AI-generated, the article does not report any realized harm such as defamation, misinformation campaigns, or other direct negative consequences. The concerns raised by users about deepfakes becoming scary reflect a credible potential for future harm, such as misinformation or reputational damage. Hence, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no harm has been documented in this case yet.

Katrina Kaif Falls Prey To Deepfake, Speaks Fluent French In Clip With Morphed Audio | Watch - News18

2024-05-01
News18
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos that have been widely disseminated, causing misinformation and reputational harm to the individuals involved. The AI-generated content has led to police complaints and FIRs, showing that harm has materialized and is being addressed legally. The harm includes violation of rights (misrepresentation, potential defamation) and harm to communities through misinformation. Since the AI system's use directly led to these harms, this is classified as an AI Incident.

Katrina Kaif Deepfake Controversy: Tiger 3 Actress Speaks Fluent French in Morphed Video

2024-05-01
India News, Breaking News, Entertainment News | India.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a deepfake video, which is a form of AI-generated manipulated content. The deepfake has directly led to harm by misleading viewers and potentially damaging the reputation of the individual depicted, which falls under harm to communities and possibly violation of rights. The article confirms the AI-generated nature of the voiceover and the altered video, and the public reaction shows concern about the deceptive nature of the content. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Katrina Kaif falls prey to deepfake video; is seen speaking French fluently - WATCH

2024-05-01
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event describes the creation and spread of AI-generated deepfake videos that misrepresent individuals, causing harm to their reputation and misleading the public. This constitutes a violation of rights and harm to communities through misinformation. The involvement of AI in generating these videos and the resulting real-world consequences (e.g., police complaints, public outcry) meet the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.

After Aamir Khan, Katrina Kaif's deepfake video goes viral

2024-05-01
ARY NEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake technology, an AI system that manipulates video and audio to create realistic but fake content. The videos have been circulated online, causing confusion and alarm among viewers, and have been used to spread false political messages, which can be considered harm to communities and reputational harm to individuals. The harm is realized, not just potential, as the videos are viral and have caused public concern. Hence, this qualifies as an AI Incident due to the direct use of AI-generated deepfakes causing harm.

Katrina Kaif speaks flawless French in a viral deep fake video, fans ask, 'When will this stop?'

2024-05-01
mid-day
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and voice cloning technology used to create misleading content involving public figures. The harm includes violation of personal rights and reputational damage, which falls under violations of human rights or breach of obligations protecting fundamental rights. The filing of an FIR by Ranveer Singh confirms that harm has occurred. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Katrina Kaif Deepfake Controversy: Actress Effortlessly Speaks French In VIRAL Video; Netizens React, Say, 'These Are Getting Scary'- Watch | SpotboyE

2024-05-03
spotboye.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated (a deepfake), indicating the involvement of an AI system in content generation. The viral spread of this deepfake has caused confusion and fear among viewers, reflecting harm to communities through misinformation and potential reputational damage to the individual depicted. Since the deepfake is already circulating and causing social concern, this constitutes an AI Incident due to realized harm (harm to communities and possibly to the individual's rights).

Katrina Kaif speaks fluent French in edited video; fans are shocked: 'Deepfakes are getting scary'

2024-05-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a synthetic voiceover (deepfake) for a video, which qualifies as an AI system involvement. However, the article does not report any realized harm or incident resulting from this AI use. The AI-generated content is disclosed with a disclaimer, and while some users were momentarily deceived, there is no evidence of significant harm such as misinformation campaigns, reputational damage, or rights violations. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI-generated deepfakes and public reactions, enhancing understanding of AI's societal impact without reporting a new harm or credible risk of harm.

'Deep Fakes Getting Scary': AI Video Shows Katrina Kaif Speaking in French, Internet Concerned

2024-05-01
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake video generation) and its use to create synthetic media. However, there is no evidence or report of actual harm occurring or any plausible immediate harm resulting from this specific video. The article mainly highlights the existence of the deepfake and public concern, which aligns with general awareness rather than a concrete incident or hazard. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context and societal reaction to AI-generated deepfakes.

Katrina Kaif's deepfake video of her speaking in French sparks concerns

2024-05-01
WION
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake generation) was used to create a realistic but false video, which led to misinformation and potential harm to the actress's reputation and public perception. The viral spread of the video caused confusion and concern among viewers, indicating harm to the community through misinformation. Since the AI-generated content directly led to misleading information and public confusion, this qualifies as an AI Incident due to harm to communities through misinformation dissemination.

Katrina's deepfake video goes viral

2024-05-01
Daily Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create a manipulated video that misrepresents a public figure, which is actively circulating and causing public concern. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in terms of misinformation, reputational damage, and potential violation of rights. The harm is realized, not just potential, as the video is viral and causing confusion and alarm. Therefore, it is not merely a hazard or complementary information but an incident.

Deepfake Alert: Morphed Video of Katrina Kaif speaking fluent French goes viral, netizens react

2024-05-01
NewsroomPost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating synthetic audio to create a deepfake video, which was then disseminated and believed by the public, causing misinformation and reputational harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (misinformation) and a violation of rights (potentially the actress's rights). The harm is realized, not just potential, as the video went viral and misled viewers.

Katrina Kaif 'Speaking' Fluent French Leaves Fans Astonished, But There's A Catch - Woman's era

2024-05-01
womansera.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, which is an AI system capable of generating highly realistic manipulated videos. The event involves the use of AI to create deceptive content, which could plausibly lead to harms such as misinformation, defamation, and privacy violations. Since no actual harm is reported as having occurred, but the potential for harm is credible and discussed, this qualifies as an AI Hazard rather than an Incident. The focus is on the risk and implications of deepfake technology rather than a realized harm.

Katrina Kaif's deepfake video shows her speaking French

2024-05-02
The Frontier Post
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content that misrepresents the celebrity's speech, leading to misinformation and potential reputational harm. The AI system's use in creating this misleading content directly harms the individual's reputation and could affect public perception, which qualifies as harm to individuals or communities. Therefore, this event constitutes an AI Incident due to realized harm caused by the AI-generated deepfake video.

Ranbir Fanatics Morph Critic's Face to Obscene Images; Katrina Kaif Speaks French in Deepfake Video

2024-05-01
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated deepfake content used to harass and cyberbully a woman by morphing her images into obscene content, which is a direct violation of her rights and causes personal harm. The deepfake video of Katrina Kaif, while not directly harmful to her, misleads the public and fans, contributing to misinformation and potential reputational harm. These harms fall under violations of rights and harm to communities. Since the harms are occurring and linked directly to AI system use, this qualifies as an AI Incident.