Meta Platforms Ran Ads Featuring AI-Generated Deepfake Nudes of Underage Jenna Ortega

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's Facebook and Instagram platforms ran ads for the Perky AI app, which used AI to generate deepfake nude images of actress Jenna Ortega based on a photo taken when she was 16. The non-consensual, sexualized imagery of a minor highlights the harm caused by AI misuse and inadequate platform moderation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Perky AI) was used to create deepfake nude images of an underage person, which is a clear violation of rights and causes harm to the individual. The ads promoting this app were actively disseminated on Meta's platforms, leading to direct harm. The involvement of AI in generating non-consensual explicit content of minors is a serious harm under the framework, fulfilling the criteria for an AI Incident. The event describes actual harm occurring, not just potential harm, and involves the use and misuse of an AI system.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability

Industries
Consumer services
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological
Reputational
Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

AI app used deepfake nude photos of underage Jenna Ortega in Meta ads

2024-03-06
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system (Perky AI) was used to create deepfake nude images of an underage person, which is a clear violation of rights and causes harm to the individual. The ads promoting this app were actively disseminated on Meta's platforms, leading to direct harm. The involvement of AI in generating non-consensual explicit content of minors is a serious harm under the framework, fulfilling the criteria for an AI Incident. The event describes actual harm occurring, not just potential harm, and involves the use and misuse of an AI system.

Facebook and Instagram allowed ads featuring deepfake nude of Jenna...

2024-03-06
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake nude images of real people, including a minor, which is a violation of rights and constitutes harm to individuals and communities. The AI system's use directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The removal of the app and ads is a response but does not negate the incident itself.

Underage picture of Jenna Ortega used in 'no clothes' deepfake app ad on Instagram, Facebook

2024-03-05
NBC News
Why's our monitor labelling this an incident or hazard?
The Perky AI app uses artificial intelligence to generate nonconsensual sexually explicit images, including of a minor (Jenna Ortega at age 16). This directly violates human rights and legal protections against child sexual exploitation and nonconsensual pornography. The AI system's use has caused harm to the individuals depicted and potentially to communities by normalizing and spreading such content. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to individuals). The involvement of AI is explicit, and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

Deepfake ads featuring Jenna Ortega ran on Meta platforms. Big Tech needs to fight this.

2024-03-06
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perky AI) that generates realistic deepfake images, including non-consensual sexualized images of a minor, which were distributed through ads on major social media platforms. This directly led to harm by violating the rights of the individual depicted (a minor) and contributing to the spread of harmful content targeting vulnerable groups. The incident includes actual harm (sexual abuse via image manipulation and distribution), not just potential harm, meeting the criteria for an AI Incident. The involvement of AI in generating the manipulated images and the resulting harm to rights and communities is explicit and central to the event.

Deepfake ads featuring Jenna Ortega ran on Meta platforms. Big Tech needs to fight this.

2024-03-06
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perky AI) generating deepfake images that sexualize a minor without consent, which is a clear violation of human rights and causes harm to the individual and communities. The AI-generated content was actively distributed on Meta platforms, leading to realized harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual and community). The article also discusses responses but the primary focus is on the incident itself, not just the response, so it is not Complementary Information.

Underage Jenna Ortega Deepfake Ads That Let Users 'Undress' Her Ran on Instagram and Facebook

2024-03-06
Complex
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI-powered app to create and distribute deepfake images that sexualize a person who was a minor when the original photo was taken. The AI system's outputs were used in ads on major social media platforms, leading to direct harm through the creation and dissemination of non-consensual, sexualized images of a minor, which is a clear violation of rights and applicable laws. The involvement of AI in generating these images and the resulting harm meets the criteria for an AI Incident under the OECD framework.

Facebook and Instagram Ran Deepfake Nude Ads of a 16-Year-Old Jenna Ortega

2024-03-06
PetaPixel
Why's our monitor labelling this an incident or hazard?
The AI system (Perky AI) was used to create explicit deepfake images of a 16-year-old Jenna Ortega, which were then advertised on major social media platforms. This is a clear violation of rights, specifically the sexualization of a minor and nonconsensual use of her likeness, which is a breach of applicable laws protecting minors and fundamental rights. The involvement of AI in generating the harmful content and its distribution through ads directly caused harm, meeting the criteria for an AI Incident.

Jenna Ortega AI Deepfake Attack: Meta Incapable of Stopping Doctored Images?

2024-03-06
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system generating deepfake images used in sexually explicit content without consent, which is a violation of rights and causes harm to the individual depicted. The AI system's use directly led to the harm, and the platform's failure to prevent dissemination exacerbates the issue. The harm is realized, not just potential, and involves violations of legal and human rights protections. Hence, this is an AI Incident rather than a hazard or complementary information.

As underage deepfakes of Jenna Ortega appear online, why isn't social media taking deepfake AI seriously?

2024-03-06
Glamour UK
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake AI apps) used to create harmful, non-consensual sexualized images of real people, including underage individuals, which constitutes a violation of rights and harm to communities. The AI's use has directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident. The involvement of social media platforms in allowing ads for such apps further implicates the AI system's use in causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Facebook and Instagram Ran Ads for AI App Featuring Deepfake Nude Images of Jenna Ortega When She Was 16 Years Old

2024-03-06
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The Perky AI app is an AI system capable of generating realistic deepfake images based on user prompts. The event describes actual harm caused by the AI system's use: non-consensual creation and distribution of explicit images of individuals, including a minor, which is a violation of rights and harmful to the persons involved. The advertisements on major social media platforms facilitated this harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

"Wednesday" Caught in a Facebook Scandal: How Compromising Deepfake Images Ended Up on Social Media

2024-03-09
PLAYTECH.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Perky AI) that generates deepfake images, which are explicitly sexualized and depict a minor, constituting a violation of rights and causing harm. The AI system's use directly led to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident. The involvement of Meta platforms in allowing the ads to run further connects the AI system's outputs to the harm. The harm is realized, not just potential, as the images were publicly distributed and caused reputational and personal harm to the individual depicted.

Jenna Ortega, Star of Netflix's "Wednesday", Victim of a "Deepfake" Campaign Featuring Nude Images on Facebook and Instagram

2024-03-07
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating deepfake nude images without consent, which is a direct violation of personal rights and causes harm to the individual targeted. The distribution of these images on Facebook and Instagram amplifies the harm to the community and the victim. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights and harm to the individual and community). The article also discusses the platforms' responses, but the primary focus is on the harm caused by the AI-generated content.

Jenna Ortega, Star of Netflix's "Wednesday", Victim of a "Deepfake" Campaign Featuring Nude Images on Facebook and Instagram

2024-03-07
UNIMEDIA
Why's our monitor labelling this an incident or hazard?
The article describes a clear AI Incident where an AI-powered application was used to create and disseminate non-consensual deepfake sexual images of Jenna Ortega, a real person. This use of AI directly led to harm in terms of violation of rights and potential psychological and reputational damage. The involvement of AI in generating explicit fake images and the resulting harm to the individual fits the definition of an AI Incident under violations of human rights and harm to communities. The article also mentions platform responses but the primary event is the realized harm caused by the AI system's use.

"Wednesday" in Deepfake: Jenna Ortega, Victim of a Scandalous Campaign

2024-03-07
Puterea.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating deepfake images that have been distributed without consent, causing direct harm to Jenna Ortega's privacy and dignity, which constitutes a violation of human rights. The distribution of manipulated explicit images is a clear harm to the individual and the community by fostering abuse and misinformation. The involvement of the AI system in creating these images and the resulting harm meets the criteria for an AI Incident. The article also mentions platform responses, but the primary focus is on the harm caused by the AI-generated content, not on complementary information or potential future harm.

Jenna Ortega, Star of the 'Wednesday' Series, Victim of a 'Deepfake' Campaign Featuring Nude Images on Social Media

2024-03-09
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate deepfake images without consent, which were distributed widely on social media, causing harm to the victim's rights and dignity. This meets the criteria for an AI Incident because the AI system's use directly led to violations of rights and harm to the community. The involvement of AI in creating manipulated explicit content and its distribution on major platforms confirms the classification as an AI Incident rather than a hazard or complementary information.