AI Deepfake Tool Generates Non-Consensual Nude Images, Causing Widespread Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A website using AI-powered deepfake technology has created and distributed realistic non-consensual nude images of thousands of women, leading to significant privacy violations, reputational harm, and mental distress. The tool has attracted millions of users, highlighting the dangers of AI misuse in generating synthetic, harmful content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system (a 'state of the art AI model' trained on millions of data points) to generate deepfake nude images without consent, targeting thousands of women. This use has directly caused harm by violating individuals' rights to privacy and dignity, and by contributing to violence against women. The harm is realized and ongoing, with millions of views and active dissemination. The AI system's role is pivotal: it enables the creation of realistic fake nudes from single images, which would otherwise be impossible or extremely difficult. Hence, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights; Reputational; Psychological

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Creepy deepfake site digitally undresses thousands of everyday women

2021-08-11
Daily Mail Online
A Powerful New Deepfake Tool Has Digitally Undressed Thousands Of Women

2021-08-11
HuffPost
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system that generates realistic deepfake nude images without consent, which is used maliciously to harm women by violating their privacy and dignity. The harms include violations of human rights and harm to communities through sexual humiliation and exploitation. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The article also discusses the societal and platform responses, but the primary focus is on the realized harm caused by the AI system's outputs, not just potential or complementary information.
Sick app 'UNDRESSES' thousands of real women every day using deepfake technology

2021-08-12
The Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (deepfake technology) that generates fake nude images of real women, which is a direct misuse of AI leading to violations of privacy and human rights. The harm is realized as the images are being produced and distributed, causing significant personal and societal harm. The involvement of AI in creating these images is central to the incident. The article also discusses responses and concerns but the primary focus is on the harm caused by the AI system's use, fitting the definition of an AI Incident.
Get ready for Zoom-based deepfake phishing attacks, expert warns

2021-08-11
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology, AI-generated synthetic media) used in phishing scams that have already caused harm (financial loss from gift card scams and wire transfer fraud). Those uses qualify as AI Incidents because the AI system's use directly led to harm. However, the article primarily focuses on expert warnings about future, more sophisticated attacks and mitigation strategies rather than detailing a new, specific incident or hazard event. Since the article mainly provides context, expert analysis, and potential future risks along with responses, it fits best as Complementary Information. It does not solely describe a new AI Incident or AI Hazard but enhances understanding of the evolving AI threat landscape and societal responses.
What you need to know about spotting deepfakes

2021-08-12
VentureBeat
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems generating synthetic media (deepfakes) and their use in harmful activities such as spear phishing and fraud, which are AI Incidents when they occur. However, the article itself does not describe a new or specific incident or hazard event but rather summarizes known incidents, expert opinions, and detection efforts. It also discusses societal implications and mitigation strategies, which aligns with the definition of Complementary Information. Hence, the classification is Complementary Information rather than AI Incident or AI Hazard.
Unravelling Deep Model Artefacts for Deepfake Videos Detection

2021-08-13
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (SimpleCNN) for detecting deepfake videos; it is clearly an AI system, as it uses convolutional neural networks for classification tasks. However, the article does not describe any realized harm or incident caused by the AI system. Instead, it discusses the system's potential benefits in mitigating misinformation, itself a societal harm. Since no harm has occurred due to the AI system and the article is primarily about the research and development of the AI model and its potential applications, this fits the category of Complementary Information. It provides context and updates on AI developments relevant to combating misinformation but does not describe an AI Incident or AI Hazard.
How deepfakes are impacting our vision of reality

2021-08-13
SWI swissinfo.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology) being used to create manipulated videos that have already caused harm by spreading disinformation, influencing political events, and enabling financial scams. These harms fall under violations of rights and harm to communities. The involvement of AI is clear and central, and the harms are realized, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information.
Deepfake Fraud: Security Threats Behind Artificial Faces - Panda Security Mediacenter

2021-08-10
pandasecurity.com
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems (deepfake generation via neural networks) have been used maliciously to cause financial fraud and deception, with concrete examples of harm already occurring. The harms are direct and significant, including financial loss and deception of individuals and organizations. The AI system's role is pivotal in enabling these harms. Although the article also discusses potential future risks and mitigation efforts, the presence of actual realized harm from AI-generated deepfakes classifies this as an AI Incident rather than a hazard or complementary information.
A Powerful New Deepfake Tool Has Digitally Undressed Thousands Of Women

2021-08-11
HuffPost UK
Why's our monitor labelling this an incident or hazard?
The AI system involved is a deepfake tool that manipulates images to create non-consensual nude images of women, which directly harms individuals by violating their privacy and dignity, leading to significant personal and social consequences. The article details the active use and spread of this AI-generated harmful content, the failure of platforms to fully prevent its dissemination, and the resulting harms to victims, including job loss, relationship damage, and mental health impacts. This meets the criteria for an AI Incident, as the AI system's use has directly led to violations of human rights and harm to communities.
Hyper-realistic deepfake tool digitally disrobes unsuspecting women, creating shareable non-consensual 'nudes'

2021-08-11
indy100.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a deepfake AI system that digitally disrobes women without their consent, creating realistic fake nudes. This use directly causes harm by violating individuals' rights and enabling non-consensual pornography, a form of violence against women. The harm is ongoing and widespread, with millions of visitors and sharing on social media. The AI system's role is pivotal in generating these images, making this a clear AI Incident under the framework definitions.
Website uses deepfake tech to undress thousands of everyday women and experts can't do anything - Daily Mail - Business Telegraph

2021-08-11
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The website employs an AI system (deep learning-based deepfake technology) to generate realistic fake nude images without consent, directly causing harm to the individuals depicted and violating their rights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The harm is realized and ongoing, as evidenced by millions of hits and widespread dissemination. The article also discusses the legal and societal challenges in addressing this harm, reinforcing the significance of the AI system's role in the incident.