AI-Generated Deepfake Pornography of Female Celebrities Sold on eBay


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Thousands of AI-generated sexually explicit deepfake images of at least 40 female celebrities, including Taylor Swift and Selena Gomez, were found for sale on eBay. Despite platform policies, top-rated sellers profited from these non-consensual images until media exposure prompted eBay to remove content and suspend accounts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to generate non-consensual deepfake sexual images, which are then sold and distributed, directly causing harm to the celebrities depicted by violating their rights and privacy. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article details the harm occurring, the platform's response, and legislative efforts, confirming the incident status rather than a mere hazard or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Safety; Transparency & explainability; Human wellbeing; Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


eBay's black market: how fake nudes of Taylor Swift, Selena Gómez and other celebrities are being sold

2024-03-13
infobae

eBay sells AI-generated sexually explicit images of Taylor Swift and Selena Gómez

2024-03-15
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to create deepfake pornographic images, which have been distributed and sold, causing direct harm to the privacy, honor, and image rights of the individuals depicted. This constitutes a violation of human rights and personal data protection laws, fulfilling the criteria for an AI Incident. The harm is actual and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content. Therefore, this event is classified as an AI Incident.

AI-generated porn photos of Taylor Swift appear in an online store

2024-03-15
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating sexually explicit images of real public figures without consent, which is a violation of their rights and constitutes harm to individuals. The sale and distribution of these images on eBay directly cause harm to the depicted persons, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The platform's ongoing presence of such content despite removal efforts further supports the classification as an incident rather than a hazard or complementary information.

AI-generated nudes of celebrities such as Margot Robbie and Selena Gomez are for sale on eBay

2024-03-13
Forbes México
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating non-consensual pornographic deepfake images of celebrities, which are being sold on eBay. This directly leads to harm by violating the depicted individuals' rights and dignity, constituting a breach of fundamental rights. The AI's role is pivotal as it enables the creation of realistic fake images that facilitate this abuse. The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework's criteria for violations of human rights and harm to communities.

eBay has explicit AI-generated images of famous women for sale on its platform

2024-03-13
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create explicit deepfake images of women celebrities without their consent, which are being sold and circulated on major platforms. This constitutes a violation of rights and exploitation, fulfilling the criteria for harm under human rights violations. The AI system's use directly leads to this harm, making it an AI Incident. The article also discusses responses and policy updates, but the primary focus is on the realized harm caused by the AI-generated content.