Italian Privacy Authority Blocks AI Deepfake App Clothoff

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Italian Data Protection Authority urgently blocked Clothoff, an AI app that generates deepfake nude images and videos without consent, including of minors. The app, operated from the British Virgin Islands, posed serious risks to privacy, dignity, and fundamental rights, prompting regulatory intervention and a broader investigation into similar AI nudification apps. [AI generated]

Why's our monitor labelling this an incident or hazard?

The app Clothoff employs generative AI to produce deepfake nude images and videos without consent, directly violating individuals' rights to privacy and dignity and breaching fundamental rights protected by law. The Italian Data Protection Authority's urgent blocking of the app indicates that harm has occurred or is ongoing. The involvement of AI in generating harmful content and the resulting violation of rights qualify this event as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities. [AI generated]
AI principles
Privacy & data governance · Respect of human rights · Safety · Accountability

Industries
Digital security · Media, social platforms, and marketing · Consumer services

Affected stakeholders
Children · General public

Harm types
Human or fundamental rights · Reputational · Psychological

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

The app that undresses people blocked by the Garante: "Fake photos without notice or consent"

2025-10-03
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The app Clothoff employs generative AI to produce deepfake nude images and videos without consent, directly violating individuals' rights to privacy and dignity and breaching fundamental rights protected by law. The Italian Data Protection Authority's urgent blocking of the app indicates that harm has occurred or is ongoing. The involvement of AI in generating harmful content and the resulting violation of rights qualify this event as an AI Incident under the framework, as the AI system's use has directly led to harm to persons and communities.

Clothoff, the app that removes clothes and creates nudes, blocked by the Privacy Garante: "Social alarm"

2025-10-03
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The Clothoff app is an AI system that generates deepfake nude images without consent, which is a violation of privacy and human rights. The harm is realized as the app enables non-consensual explicit content creation, including content involving minors, which is socially harmful and legally problematic. The intervention by the Privacy Authority confirms the recognition of harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

The Privacy Garante halts "Clothoff", the app that undresses people

2025-10-03
Tgcom24
Why's our monitor labelling this an incident or hazard?
The app 'Clothoff' uses AI generative systems to create deepfake nude images of real people without consent, which constitutes a violation of personal rights and privacy. This is a clear breach of fundamental rights protected by law, specifically the right to privacy and protection of personal data. The intervention by the privacy authority indicates that harm is occurring or imminent due to the app's use. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and personal data protection laws.

Garante: stop to Clothoff, the app that undresses people

2025-10-03
ANSA.it
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses generative AI to create fake nude images and videos of real people without consent, including minors, which directly harms the dignity, privacy, and fundamental rights of those depicted. The Italian data protection authority's urgent action and investigation confirm the realized harm and legal violations. The AI system's use has directly led to violations of rights and social harm, meeting the criteria for an AI Incident under the OECD framework.

Garante: stop to Clothoff, the app that creates pornographic deepfakes

2025-10-03
ANSA.it
Why's our monitor labelling this an incident or hazard?
The Clothoff app is an AI system generating deepfake content that impersonates real individuals in explicit scenarios without consent, including minors. This constitutes a violation of human rights and data protection laws, causing harm to the dignity and privacy of affected persons. The Italian data protection authority's urgent order to limit data processing and investigation confirms the materialized harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

Garante, stop to Clothoff, the app that undresses people

2025-10-03
ANSA.it
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses AI generative technology to create deepfake nude images, which constitutes an AI system under the definitions. The use of this AI system has directly caused harm by violating personal data protection, privacy, and dignity, especially affecting minors and women, all of which constitute fundamental rights violations. The article references actual incidents and social alarm, confirming that harm is occurring. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

The Garante blocks Clothoff, the app that creates fake nudes with AI

2025-10-03
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses AI generative technology to create deepfake nude images and videos without consent, including of minors, which constitutes a violation of privacy and personal dignity rights. The blocking by the privacy authority is a response to these harms already occurring or highly likely to occur. The AI system's use directly leads to violations of human rights and privacy, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance update but concerns realized harm and regulatory intervention.

AI: Privacy Garante blocks the Clothoff app that "undresses" people

2025-10-03
Il Sole 24 ORE
Why's our monitor labelling this an incident or hazard?
The Clothoff app is an AI system generating synthetic nude images and videos without consent, directly leading to violations of personal rights and dignity, especially concerning minors. This constitutes harm to individuals' rights and communities, fulfilling the criteria for an AI Incident. The blocking by the privacy authority confirms the harm is recognized and ongoing, not merely potential. Therefore, this event is classified as an AI Incident due to realized harm caused by the AI system's use.

The Garante blocks the Clothoff app: it allowed photos of real people to be undressed with AI

2025-10-03
Fanpage
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses AI generative technology to create fake nude images of real people without consent, which constitutes a violation of fundamental rights such as privacy, dignity, and data protection. The article explicitly states that the app's use has caused harm, including reputational damage and violations of rights, especially concerning minors. The involvement of AI in generating harmful content that directly impacts individuals' rights and freedoms fits the definition of an AI Incident. The blocking by the Garante is a regulatory response to an ongoing AI Incident involving harm caused by the AI system's use.

The Privacy Garante has blocked Clothoff, the app that creates deepnude images and undresses people

2025-10-04
Hardware Upgrade
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses generative AI to create deepfake nude images without consent, directly resulting in privacy violations and potential harm to individuals, including minors. This constitutes a violation of fundamental rights and privacy, which fits the definition of an AI Incident due to realized harm from the AI system's use. The blocking by the privacy authority is a response to this harm. The presence of AI is explicit, the harm is direct (privacy violation and potential misuse), and the event involves the use of the AI system. Therefore, this is classified as an AI Incident.

The Garante blocks Clothoff, the app that undresses people

2025-10-03
AGI
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI generative system explicitly described as producing deepfake nude images and videos without consent, including of minors, which constitutes a violation of fundamental rights and personal dignity. The Italian authority's urgent block indicates that harm is occurring or imminent. The AI system's use directly leads to violations of human rights and harms communities through social alarm and abuse. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to realized harm.

The app that undresses people has been "blocked" by the Privacy Garante: stop to Clothoff

2025-10-03
DDay.it
Why's our monitor labelling this an incident or hazard?
The Clothoff app is an AI system that generates manipulated images (deepfakes). Its use has directly led to violations of fundamental rights, including privacy and dignity, and poses harm especially to minors. The Garante's intervention is a regulatory response to these harms already occurring, indicating that the AI system's use has caused actual harm. Hence, this event meets the criteria for an AI Incident, as the AI system's use has directly led to harm and legal violations.

The Privacy Garante blocks Clothoff, the app that creates nude deepfakes

2025-10-03
IlSoftware.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of sophisticated AI models to generate deepfake nude images without consent, causing serious privacy violations and harm to individuals, including vulnerable groups like minors. This constitutes a breach of fundamental rights and data protection laws, fulfilling the criteria for an AI Incident. The intervention by the regulatory authority is a response to these realized harms, not merely a potential risk or complementary information. Therefore, the event is classified as an AI Incident due to the direct harm caused by the AI system's use.

Deepfake: Privacy Garante blocks Clothoff

2025-10-03
Punto Informatico
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI system generating deepfake images, which directly involves AI. The misuse and lack of safeguards have led to violations of data protection laws and risks to fundamental rights, including dignity and privacy, which are harms under the framework. The authority's intervention and blocking of data processing confirm that harm is occurring or imminent. Therefore, this event qualifies as an AI Incident due to realized harm to rights and potential harm to individuals.

The Garante blocks Clothoff: the app that "undresses" people with AI

2025-10-03
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using generative AI to create deepfake nude images and videos, which directly leads to violations of human rights and harms to individuals' dignity and privacy. The harms are realized and documented, including abuse of minors and non-consensual use of images, fulfilling the criteria for an AI Incident. The regulatory action and investigation further confirm the materialized harm caused by the AI system's use.

The Privacy Garante puts the clothes back on Clothoff, the AI that undresses people

2025-10-03
Startmag
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI system that generates realistic nude images and videos from user-provided photos, including those of minors, without consent or safeguards. This directly causes harm by violating privacy, dignity, and data protection rights, and risks defamation and extortion. The regulatory intervention confirms the recognition of these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of human rights and significant harm to individuals and communities.

Privacy Garante: stop to Clothoff, the app that undresses people

2025-10-03
Agenparl
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI generative system producing deepfake nude images and videos without consent, which constitutes a violation of fundamental rights, privacy, and personal dignity. The app's operation has caused realized harm to individuals and society, including minors, as evidenced by the regulatory authority's urgent action and the social alarm reported. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Garante: stop to Clothoff, the app that undresses people

2025-10-03
TargatoCN
Why's our monitor labelling this an incident or hazard?
The Clothoff app employs generative AI to produce deepfake nude images and videos without consent, including of minors, directly infringing on privacy, dignity, and data protection rights. The Italian data protection authority's urgent action and the mention of social alarm and recent national incidents confirm that harm has occurred. The AI system's use has directly led to violations of fundamental rights and significant harm, fitting the definition of an AI Incident.

Deepfake: the Garante blocks Clothoff with an immediate stop to the app that undresses people

2025-10-03
Stato Quotidiano
Why's our monitor labelling this an incident or hazard?
The app uses generative AI to produce deepfake content that harms individuals by violating their privacy and dignity, especially affecting minors. The AI system's use has directly led to realized harm through unauthorized creation and dissemination of explicit fake images, which is a clear violation of human rights and data protection laws. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly caused harm to rights and individuals.

Stop to the app that undresses people. The Italian Garante takes on Clothoff over the generation of nude images

2025-10-03
Startupitalia
Why's our monitor labelling this an incident or hazard?
The Clothoff app uses AI generative technology to produce non-consensual deepfake nude images, which constitutes a violation of human rights and privacy protections. The involvement of AI in generating these images and the resulting harm to individuals' dignity and rights qualify this as an AI Incident. The article describes realized harm through the app's operation and the regulatory response to it, meeting the criteria for an AI Incident rather than a hazard or complementary information.

The Privacy Garante blocks Clothoff, the app that undresses people

2025-10-03
Corriere Nazionale
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI generative system that produces deepfake nude images and videos without consent, including of minors, which constitutes a violation of fundamental rights and privacy. The harm is realized and ongoing, as indicated by social alarm and recent national incidents. The involvement of AI in generating these images and the direct harm to individuals' rights and dignity qualify this as an AI Incident under the framework.

Garante Privacy * Web and deepfakes: "Stop to 'Clothoff', the app that undresses people"

2025-10-03
Agenzia giornalistica Opinione
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses generative AI to create deepfake nude images and videos without consent, including of minors, which constitutes a violation of fundamental rights and privacy. The involvement of AI in generating harmful content that affects individuals' dignity and rights is explicit. The regulatory action and social alarm confirm that harm is occurring. Therefore, this qualifies as an AI Incident due to realized violations of rights and harm to individuals and communities.

Garante: stop to Clothoff, the app that undresses people

2025-10-03
Villaggio Globale
Why's our monitor labelling this an incident or hazard?
The Clothoff app employs generative AI to produce deepfake nude images and videos without consent, directly violating privacy and fundamental rights, particularly affecting minors. The Italian data protection authority's urgent action highlights the realized harm and social alarm caused by this AI system. The AI system's use has directly led to violations of rights and harm to individuals, meeting the criteria for an AI Incident under the framework.

Deepfake: Privacy Garante blocks Clothoff, the app that undresses people

2025-10-03
NT+ Diritto
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses AI generative technology to create deepfake nude images and videos, which constitutes an AI system. Its use has directly caused harm by violating fundamental rights such as privacy, dignity, and data protection, particularly for minors, fulfilling the criteria for an AI Incident under violations of human rights and fundamental rights. The blocking by the privacy authority is a response to this realized harm. Therefore, this event qualifies as an AI Incident.

The deepnude phenomenon is spreading, and the Privacy Garante steps in by blocking the Clothoff app

2025-10-04
lespresso.it
Why's our monitor labelling this an incident or hazard?
The AI system (Clothoff) is explicitly described as using advanced AI and deep learning to generate manipulated images that realistically depict people without clothes. The harms are direct and significant: violation of privacy rights, lack of consent from depicted individuals, exposure of minors due to lack of age verification, and facilitation of digital violence and reputational damage. The regulatory intervention to block the app confirms the recognition of these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of fundamental rights and harm to individuals and communities.

The Garante blocks Clothoff: stop to the app that "undresses" with AI

2025-10-05
lentepubblica
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI system that manipulates images to create realistic nude deepfakes without consent or age verification, directly causing harm to individuals' privacy, dignity, and potentially violating human rights and data protection laws. The Italian Data Protection Authority's urgent blocking order and detailed findings confirm that harm has occurred and is ongoing, fulfilling the criteria for an AI Incident. The AI system's development and use have directly led to violations of fundamental rights and significant harm to individuals and communities, especially vulnerable groups like minors. Therefore, this event is classified as an AI Incident.

Clothoff: stop to the app that "undresses" people

2025-10-06
Quotidiano Libero
Why's our monitor labelling this an incident or hazard?
Clothoff is an AI system that creates realistic deepfake nude images without consent, which constitutes a violation of personal rights and privacy, a form of harm to individuals and communities. The article reports that minors are exposed to this harm, and the app lacks proper age verification and consent mechanisms. The data protection authority's urgent action to limit the app's data processing in Italy confirms that harm is occurring or imminent. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.

The Garante halts the app that "undresses" people: "utmost attention to minors"

2025-10-07
estense.com
Why's our monitor labelling this an incident or hazard?
The app Clothoff uses AI to create realistic manipulated images and videos (deepfakes) that violate personal rights and privacy, including generating explicit content involving minors. The Garante's urgent stop order indicates that harm has occurred or is ongoing, fulfilling the criteria for an AI Incident due to violations of personal rights and potential harm to individuals, especially minors. The AI system's use directly leads to these harms, making this an AI Incident rather than a hazard or complementary information.