Apple and Google App Stores Promote AI 'Nudify' Apps Enabling Nonconsensual Deepfakes

Apple and Google are under scrutiny after reports revealed that their app stores host and promote AI-powered 'nudify' apps that generate nonconsensual sexualized images, violating privacy and human rights. Despite policies prohibiting such content, enforcement gaps allowed these apps to amass millions of downloads and generate significant revenue, exposing users to harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The apps use AI systems for image manipulation to generate nonconsensual sexualized images, which directly violates individuals' rights and causes harm. The widespread availability and use of these apps, despite platform policies, have led to actual harm, including privacy violations and potential psychological harm to victims. The involvement of AI in generating these images and the direct link to harm fulfill the criteria for an AI Incident. The article does not merely discuss potential risks or responses but documents ongoing harm caused by AI systems.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Apple, Google Offer 'Nudify' Apps Despite Policies Against Them

2026-04-15
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The apps use AI systems for image manipulation to generate nonconsensual sexualized images, which directly violates individuals' rights and causes harm. The widespread availability and use of these apps, despite platform policies, have led to actual harm, including privacy violations and potential psychological harm to victims. The involvement of AI in generating these images and the direct link to harm fulfill the criteria for an AI Incident. The article does not merely discuss potential risks or responses but documents ongoing harm caused by AI systems.

How vibe coding app Anything is rebuilding after getting booted from the App Store twice

2026-04-14
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of 'vibe coding' apps that use AI-powered tools to enable app creation. However, the article focuses on Apple's policy enforcement actions and the developer's response to being removed from the App Store. There is no indication that the AI systems caused injury, rights violations, property or community harm, or critical infrastructure disruption. The concerns are about potential misuse, but no actual harm has occurred. Therefore, this is best classified as Complementary Information: it provides context on governance and ecosystem responses to AI-powered app-building tools rather than reporting an AI Incident or Hazard.

App Store, Google Play Store accused of promoting nudify apps through search suggestions: All details

2026-04-16
Digit
Why's our monitor labelling this an incident or hazard?
The apps in question are AI systems capable of generating deepfake nude images, directly implicating violations of human rights and privacy (a breach of obligations protecting fundamental rights). The promotion of such apps through search suggestions and paid placements facilitates their use, leading to realized harm. Apple's removal of some apps confirms that harm has occurred, qualifying this as an AI Incident rather than a mere hazard or complementary information. The watchdog's call for stronger enforcement underscores the ongoing risk but does not negate the existing harm.

App Store search suggestions reportedly steered users to 'nudify' apps

2026-04-15
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate deepfake nude images, which directly cause harm by violating individuals' rights and potentially exposing minors to inappropriate content. The AI systems' outputs are central to the harm, and the app stores' AI-driven search and ad systems facilitated user access to these harmful apps, contributing indirectly to the harm. The harm is realized, not just potential, as the apps were available and used to create such content. The developer's admission of using AI for image generation and the subsequent removal of apps by Apple further confirm the AI system's role in causing harm. Thus, this is an AI Incident.

AppleInsider.com

2026-04-15
AppleInsider
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (deepfake and nudify apps) whose use has directly caused harm by enabling the creation and dissemination of nonconsensual explicit content, violating privacy and consent rights. The harm is realized and ongoing, as evidenced by the millions of downloads and revenue generated. The article documents the failure of oversight mechanisms (App Review) to prevent this harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities. It is not merely a potential risk or a complementary update but a concrete case of harm caused by AI.

Apple and Google accused of hosting 'Nudify' apps despite strict policies

2026-04-16
News9live
Why's our monitor labelling this an incident or hazard?
The apps use AI-based image processing to generate nonconsensual sexualized images, violating human rights and platform policies. The harm is realized: these apps have been downloaded hundreds of millions of times, generating significant revenue and exposing many users to harmful content. The platforms' enforcement loopholes and promotion mechanisms contribute indirectly to the harm. Therefore, the event meets the criteria for an AI Incident due to direct and indirect harm caused by AI system use in violation of rights and policies.