AI Deepfake App Enables Nonconsensual Pornography, Prompting Outcry and Deplatforming


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A new AI-powered deepfake app allowed users to upload photos and generate highly realistic, nonconsensual pornographic videos, primarily targeting women. The app, discovered by researchers and reported by MIT Technology Review, caused significant psychological and reputational harm before being taken offline following public backlash. Similar unethical AI tools remain online.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes an AI system (a deepfake face-swapping app) used to generate nonconsensual pornographic videos from uploaded photos of real people. The harms are realized rather than merely potential, including psychological trauma, reputational damage, and violations of personal rights, and the AI system's role is pivotal in producing them. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Scammers Are Using Deepfake Videos Now

2021-09-13
Slate Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate deepfake videos that scammers use to deceive and extort victims, leading to realized harms such as financial loss and blackmail. The AI system's use in creating synthetic content that directly causes harm to individuals fits the definition of an AI Incident, as the AI's role is pivotal in enabling these fraudulent activities and resulting harms. Therefore, this event is classified as an AI Incident.

Horrifying New AI App Puts Women's Faces Into Porn Videos With A Simple Click

2021-09-14
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) explicitly mentioned as generating manipulated pornographic videos using uploaded facial images. The use of this AI system has directly caused harm to individuals by enabling non-consensual sexual content creation, which is a violation of personal rights and causes psychological harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article also highlights the ethical and legal concerns, reinforcing the harm caused. Therefore, the classification is AI Incident.

A horrifying new AI app swaps women into porn videos with a click

2021-09-13
MIT Technology Review
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a deepfake face-swapping app used to create nonconsensual pornographic content. The harms include psychological trauma, reputational damage, job loss, and social consequences, which fall under injury to persons and violations of rights. The AI system's use is central to causing these harms, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents ongoing, realized harm to victims, thus it is not an AI Hazard or Complementary Information.

Pretty Much Anyone Can Make Deepfake Porn With This New App

2021-09-13
InsideHook
Why's our monitor labelling this an incident or hazard?
The app is an AI system that generates deepfake pornographic content by swapping faces in videos, which is a clear use of AI-generated synthetic media. The harms described include violations of privacy, reputational damage, emotional trauma, and potential blackmail, all of which constitute violations of human rights and harm to communities. The AI system's development and use have directly led to these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly caused significant harm as defined in the framework.

Scary new deepfake app can turn you into a pornstar without your consent

2021-09-15
The Citizen
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to generate harmful content without consent, which constitutes a violation of rights and causes psychological and reputational harm. The harm is realized and ongoing, meeting the criteria for an AI Incident. The description details direct harm caused by the AI system's use, not just potential or future harm, so it is not merely a hazard or complementary information.

The threat of deepfake

2021-09-15
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article primarily addresses the plausible future harms that deepfake AI technology could cause, such as misinformation, identity fraud, and political manipulation, which align with AI Hazards. There is no description of an actual incident where harm has occurred due to deepfake AI use. The discussion of legislation and societal responses further supports this as complementary context rather than an incident. Therefore, the event is best classified as an AI Hazard because it outlines credible risks of harm from deepfake AI technology that could plausibly lead to AI Incidents if unaddressed.

Activists deplatformed a deepfake porn app, but others remain

2021-09-15
Input
Why's our monitor labelling this an incident or hazard?
The app is an AI system that uses deepfake technology to generate manipulated pornographic videos without consent, which constitutes a violation of human rights and causes harm to individuals. The article reports on the app's active use and user base, indicating that harm has occurred. The deplatforming action is a response to this harm but does not negate the incident itself. The presence of the AI system, its use, and the resulting harm align with the definition of an AI Incident.

A horrifying new AI app swaps women into porn videos with a click (Karen Hao/Technology Review)

2021-09-13
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as using deepfake technology to swap faces into porn videos, which is a clear use of AI for generating synthetic content. The harm is realized as it involves non-consensual pornography, violating privacy and potentially causing psychological and reputational harm to the individuals depicted. This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and communities.

MIT: New AI App 'Deepfakes' Your Profile Pic Into A Porn Video

2021-09-15
DailyAlts
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as deepfake AI technology used to generate non-consensual pornographic videos by face-swapping. The harm is direct and realized, including psychological trauma and reputational damage, which aligns with violations of human rights and harm to communities. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.