YouTube removes over 1,000 AI-generated celebrity deepfake scam ads


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google-owned YouTube has removed more than 1,000 AI-generated deepfake scam videos featuring celebrities such as Taylor Swift, Steve Harvey, and Joe Rogan promoting Medicare scams. The ads, viewed nearly 200 million times, were traced to an advertising ring uncovered by 404 Media. Separately, non-consensual deepfake pornography of Taylor Swift reached 45 million views on X.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating deepfake videos of celebrities used to deceive users into scams, a direct harm to individuals and communities through fraud. The AI-generated content was actively used and viewed by millions, so the harm is realized rather than merely potential. YouTube's removal of these videos is a response to an AI Incident involving the misuse of AI-generated content for fraudulent purposes.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Healthcare, drugs, and biotechnology; Digital security; Government, security, and defence

Affected stakeholders
Consumers; General public; Other

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation; Organisation/recommenders

Articles about this incident or hazard


YouTube deletes 1,000 videos of 'celebrity ads', here's why

2024-01-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos of celebrities used to deceive users into scams, a direct harm to individuals and communities through fraud. The AI-generated content was actively used and viewed by millions, so the harm is realized rather than merely potential. YouTube's removal of these videos is a response to an AI Incident involving the misuse of AI-generated content for fraudulent purposes.

YouTube removes over 1,000 videos of celebrity AI scam ads

2024-01-27
ETCISO.in
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos of celebrities used in scam advertisements, which have been viewed nearly 200 million times, indicating widespread harm through fraud and misinformation. The use of AI-generated non-consensual explicit content also constitutes harm to individuals' rights and reputations. The AI system's use directly led to these harms, qualifying this as an AI Incident. The platform's removal of videos is a mitigation response but does not negate the occurrence of harm.

YouTube Removes Over 1,000 Deepfake, AI-Generated Scam Ad Videos

2024-01-27
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos that were actively promoting scams, leading to harm to individuals and communities through deception and fraud. The harm is realized as these videos had millions of views and were removed only after investigation. The AI system's use in generating these videos is central to the incident. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm.

YouTube deletes 1,000 scam videos with AI-generated celebrity ads

2024-01-26
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create fraudulent videos. The harm is realized as these videos were used to scam viewers, which is a direct harm to individuals and communities. The large scale of views and the nature of the scam demonstrate significant harm. Therefore, this is an AI Incident because the AI system's use directly led to harm through deceptive and fraudulent content dissemination.

YouTube Removes Over 1,000 AI-Generated Videos Including Celebrity 'Sex Videos' And Scam Ads; Details

2024-01-26
Jagran English
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that have been used maliciously to create fraudulent ads and non-consensual sexual content, causing direct harm to individuals' rights and communities. The presence of AI is explicit in the generation of deepfake content. The harms are realized, not just potential, as the videos have been viewed millions of times and caused public concern. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content.

Crackdown on Deepfake: YouTube takes down 1000 AI-driven celebrity scam ads

2024-01-27
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake videos that have directly led to harm by spreading deceptive scam advertisements and sexually explicit content, causing harm to individuals (celebrities and users) and communities through misinformation and harassment. The AI system's use in creating these misleading and harmful videos fulfills the criteria for an AI Incident, as the harm is realized and the AI's role is pivotal. The platform's policy updates and removals are responses to these incidents, but the primary event is the occurrence of harm due to AI misuse.

Deepfake scams crackdown: YouTube deletes 1,000 scam videos with AI-generated celebrity ads

2024-01-26
India TV News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology to create fraudulent celebrity ads. The AI-generated content was used maliciously to promote Medicare scams, directly causing harm to people by misleading them and potentially causing financial loss. The removal of these videos is a response to an ongoing AI Incident involving harm to communities and individuals. The presence of AI-generated deepfakes as a tool for scams fits the definition of an AI Incident because the AI system's use directly led to harm. The article also mentions the broader challenge of AI misuse but focuses on the realized harm from these scam videos, not just potential future harm.

YouTube removes over 1,000 videos of celebrity AI scam ads

2024-01-26
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos of celebrities promoting scams, which directly led to harm by deceiving viewers and facilitating fraud. The AI-generated content caused harm to individuals (scam victims) and communities by spreading misinformation and fraudulent schemes. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in scam ads.

YouTube removes over 1,000 videos of celebrity AI scam ads

2024-01-26
The Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos of celebrities promoting scams, which directly caused harm to users through fraudulent advertising and misinformation. The large scale of views and complaints indicates significant impact. The non-consensual deepfake pornographic content also constitutes a violation of rights and harm to individuals. YouTube's removal of these videos is a response to an ongoing AI Incident involving misuse of AI-generated content causing harm to communities and individuals.

YouTube Deepfake: Google-Owned Platform Deletes Over 1,000 Celebrity AI Scam Ad Videos

2024-01-26
LatestLY
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake videos used in scam advertisements, which have caused harm by misleading the public and facilitating fraudulent activities. The AI system's role in generating these videos is central to the harm, fulfilling the criteria for an AI Incident due to violations of rights (non-consensual use of celebrity likeness) and harm to communities (scam victims). The platform's removal of these videos is a mitigation response but does not negate the occurrence of harm.

YouTube removes over 1,000 videos of celebrity AI scam ads

2024-01-26
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos of celebrities used in scam advertisements, which have been viewed by millions and have caused harm by deceiving users and facilitating scams (harm to communities). The non-consensual deepfake pornographic content also represents a violation of rights and harm to individuals. The AI system's use in creating and spreading these videos directly led to these harms. YouTube's removal of the videos is a mitigation response but does not negate the occurrence of harm. Hence, this is classified as an AI Incident.

YouTube Deletes 1,000 Videos of Celebrity AI Scam Ads

2024-01-25
404 Media
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake videos used to promote scams, which directly harm communities by spreading fraudulent content and misleading the public. The AI system's use in creating these videos is central to the harm, fulfilling the criteria for an AI Incident. The deletion of videos is a response but does not negate the incident classification.

YouTube deletes 1,000 celebrity ad videos: what is the reason behind it?

2024-01-27
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake videos of celebrities, which were used to deceive users into scams. This misuse of AI has directly led to harm by misleading people and potentially causing financial or other damages. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated deceptive content.

Microsoft CEO breaks his silence on Taylor Swift's leaked deepfake image with a strong statement

2024-01-27
Prabhat Khabar - Hindi News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake image, which is a known AI application. The viral spread of this manipulated content has caused harm to the individual depicted (Taylor Swift) and distress among her fans, indicating harm to communities and potential violation of rights. The Microsoft CEO's reaction underscores the seriousness of the incident. Therefore, this is an AI Incident due to realized harm caused by the AI-generated deepfake.

YouTube takes major action on deepfake videos, removes thousands of videos

2024-01-26
punjabkesari
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology using AI/deep learning) that have been used to create harmful content (deepfake videos) leading to reputational harm and misinformation, which qualifies as harm to communities and individuals. The videos have been widely viewed and caused harm, fulfilling the criteria for an AI Incident. YouTube's removal of these videos is a mitigation response but does not negate the fact that the AI system's use has already caused harm. Therefore, this is classified as an AI Incident.

Taylor Swift: Obscene photos of Taylor Swift cause a stir in the US Congress

2024-01-27
Inkhabar
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfake technology, which is an AI system capable of generating realistic fake images. The creation and viral spread of non-consensual explicit deepfake images constitute a direct harm to the individual involved, violating rights and causing reputational and emotional harm. This meets the criteria for an AI Incident as the AI system's misuse has directly led to harm. The political response and calls for legislation are complementary information but do not negate the incident classification.

Celebrity AI Scam: YouTube removes over 1,000 celebrity AI scam videos

2024-01-26
LatestLY Hindi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos of celebrities without consent, which were used in scams and non-consensual explicit content dissemination. This directly led to harm including violation of rights (privacy, consent), reputational damage, and potential psychological harm. The large scale of distribution and user complaints confirm the harm has occurred. Hence, it meets the criteria for an AI Incident as the AI system's use directly caused significant harm.

Google in action mode, removes thousands of celebrity deepfake videos from YouTube

2024-01-26
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos, which are manipulated media created using AI technology. The widespread dissemination of these videos on YouTube and other platforms has caused harm to individuals' reputations and privacy, constituting violations of rights and harm to communities. The removal of these videos by YouTube is a response to an ongoing AI Incident where the AI-generated content has already caused harm. Hence, the event meets the criteria for an AI Incident due to the direct harm caused by AI-generated deepfake content.

Deepfakes trouble America too: demand for legislation arises after famous singer's viral image

2024-01-26
आज तक (Aaj Tak)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images, which have been widely shared and caused harm to individuals' privacy and reputation, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article reports actual viral deepfake content causing harm, not just potential risk, and the political response is a complementary aspect. Therefore, this is an AI Incident due to realized harm from AI-generated deepfakes.

Taylor Swift falls victim to deepfakes, Hollywood also targeted

2024-01-27
News24 Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images, which are AI-generated synthetic media. The misuse of these AI-generated images has directly led to harm in terms of privacy violations and reputational damage to the individuals depicted, including Taylor Swift. The incident has caused social harm and legal concerns, fitting the definition of an AI Incident due to violation of rights and harm to individuals. Therefore, this event qualifies as an AI Incident.

Taylor Swift's objectionable AI video goes viral; Elon Musk takes this big step

2024-01-27
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images that are objectionable and harmful to Taylor Swift's reputation and privacy, which is a violation of rights. The content was widely disseminated and viewed, causing harm to the community and the individual. The platform's removal of the content and keyword bans are responses to this harm. Therefore, this is an AI Incident as the AI system's misuse directly led to harm.

Viral obscene photos of Taylor Swift cause a stir in Congress; White House spokesperson and US politicians demand a new law against deepfake AI

2024-01-27
दैनिक जागरण (Dainik Jagran)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake AI) to create and spread manipulated explicit images of a public figure, which directly leads to harm in terms of privacy violation and reputational damage. This meets the criteria for an AI Incident as the AI system's use has directly led to harm. Additionally, the political and social response demanding new laws is complementary information but the main event is the harm caused by the deepfake images. Therefore, the classification is AI Incident.