AI App for Undressing Photos Sparks Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Portuguese organizations MiudosSegurosNa.Net and Agarrados à Net have reported to Meta an AI app that can "undress" people in photos, urging the removal of its ads from Facebook and Instagram. Despite the complaint, the app remains active. The incident raises significant privacy and human rights concerns and has prompted calls for regulatory action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The app’s AI system is actively being used to generate explicit images of individuals without their consent, a clear violation of fundamental rights and personal dignity. The harm has materialized: ads are live and users can download illicit content. This is therefore an AI Incident involving the misuse of generative AI to create non-consensual deepfakes.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard


Portuguese organizations alert Meta to an application that undresses people using AI

2024-12-02
SAPO
Why's our monitor labelling this an incident or hazard?
The app’s AI system is actively being used to generate explicit images of individuals without their consent, a clear violation of fundamental rights and personal dignity. The harm has materialized: ads are live and users can download illicit content. This is therefore an AI Incident involving the misuse of generative AI to create non-consensual deepfakes.

Associations denounce app that can undress people in photos using AI - SAPO Tek

2024-12-02
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The app’s AI system is being used to produce illicit, non-consensual sexual imagery, a violation of personal and human rights. The harm has materialized: people’s privacy and dignity are being directly attacked. This makes it an AI Incident rather than a potential hazard or mere background update.

Associations denounce app that can undress people in photos with AI

2024-12-02
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The app explicitly uses AI to remove clothing from photos without consent, facilitating the creation of illicit deepfake nudes. This constitutes a direct violation of personal privacy and human rights, causing real harm. Therefore, it is classified as an AI Incident.

Associations denounce 'app' that can undress people in a photograph using Artificial Intelligence

2024-12-02
Correio da Manha
Why's our monitor labelling this an incident or hazard?
An AI system (the “nudification” app) is at the center of this report. While no specific harm to identified individuals has been reported, the app’s existence and its ongoing advertising on Meta platforms create a credible risk of non-consensual deepfake nudity and related rights violations. Because this describes a potential harm scenario, it is classified as an AI Hazard.

Meta: Associations denounce application that can "undress" people in a photo using AI

2024-12-02
Publico
Why's our monitor labelling this an incident or hazard?
The core issue is the use of a generative AI system that could plausibly lead to serious harms, namely non-consensual intimate imagery and violations of privacy and other rights. No concrete harm has been reported yet, but the app’s active deployment and its encouragement of an illicit act constitute a credible risk. This event therefore qualifies as an AI Hazard.

Portuguese associations denounce "app" that uses Artificial Intelligence to "undress people"

2024-12-02
Jornal Expresso
Why's our monitor labelling this an incident or hazard?
The described application uses AI to produce non-consensual sexual imagery, constituting concrete harm: an ongoing infringement of people’s rights with clear potential for abuse. This meets the criteria for an AI Incident, as the AI system’s use has directly led to violations of human rights and privacy.

Associations warn of app that can undress people in a photograph

2024-12-02
ionline
Why's our monitor labelling this an incident or hazard?
The app explicitly uses AI to generate nude images from photos, a direct use of an AI system for harmful purposes. The harms include violations of privacy and human rights, as well as potential psychological and reputational damage to the individuals depicted. Because the app is active and being promoted, the harm is ongoing. This therefore qualifies as an AI Incident: the system’s use has directly led to rights violations and harm to individuals and communities.

Associations denounce 'app' that can undress people…

2024-12-02
Marketeer
Why's our monitor labelling this an incident or hazard?
The app is explicitly described as using AI to generate nude images from photos of clothed individuals, a direct misuse of the technology that harms individuals’ privacy and dignity. The harm is realized: the app is active and advertised, encouraging illicit acts and potentially causing psychological and reputational damage to victims. Because the AI system is in use, the harm is occurring, and the organizations have reported it as a serious issue, this qualifies as an AI Incident.

Visão | Portuguese organizations alert Facebook and Instagram owner to app that "undresses" people

2024-12-02
Visão
Why's our monitor labelling this an incident or hazard?
The app uses AI to generate nude images from photos of clothed people, a direct violation of individuals’ rights and privacy. The involvement of AI is explicit, and the harm is realized because the app is active and advertised. This fits the definition of an AI Incident: the system’s use has directly led to human rights violations and harm to individuals. The complaint to Meta and the lack of a response further underline the ongoing nature of the harm.