AI Image Generator Stable Diffusion Used for Non-Consensual Pornography and Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The open-source AI image generator Stable Diffusion has been used to create and share non-consensual pornographic images, including fake celebrity nudes, as well as realistic fake photos, causing trauma, privacy violations, and misinformation. Reddit banned several subreddits that distributed such content, underscoring the significant harms caused by unfiltered use of this AI system.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Stable Diffusion) to generate pornographic images, which were shared on Reddit subreddits. The platform's action to ban these subreddits is a response to the harm caused by the distribution of non-consensual intimate media, which constitutes a violation of rights and causes harm to communities. Since the harm (sharing of non-consensual intimate AI-generated content) has already occurred and led to platform intervention, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability; Transparency & explainability; Robustness & digital security; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological; Human or fundamental rights; Reputational; Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


Reddit bans users posting NSFW cyber-porn made with AI image generator

2022-08-25
WSTale.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Stable Diffusion) to generate pornographic images, which were shared on Reddit subreddits. The platform's action to ban these subreddits is a response to the harm caused by the distribution of non-consensual intimate media, which constitutes a violation of rights and causes harm to communities. Since the harm (sharing of non-consensual intimate AI-generated content) has already occurred and led to platform intervention, this qualifies as an AI Incident under the framework.

This AI Tool Is Being Used To Make Freaky, Machine-Generated Porn

2022-08-24
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Stable Diffusion) used to generate images, including pornographic content. However, the article does not describe any direct or indirect harm resulting from this use, such as violations of rights, health harm, or other significant harms. The focus is on the potential misuse by users rather than a specific incident causing harm. Therefore, this is not an AI Incident or AI Hazard but rather general information about AI use and its societal implications, fitting best as Complementary Information.

Reddit bans users posting NSFW cyber-porn made with AI image generator

2022-08-25
Metro
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Stable Diffusion) to generate pornographic images, some of which are non-consensual or fake celebrity nudes, implicating violations of rights and causing harm to individuals' privacy and dignity. The banning of subreddits is a response to this harm. Since the AI system's use directly led to violations of rights and harm to communities, this qualifies as an AI Incident under the framework.

Is that Trump photo real? Free AI tools come with risks

2022-08-27
Taipei Times
Why's our monitor labelling this an incident or hazard?
The AI system (Stable Diffusion) is explicitly mentioned and described as capable of generating realistic fake images, including of public figures and sensitive events. The use of this AI system has directly led to the creation and dissemination of fake images, which constitutes harm to communities by spreading misinformation and eroding trust. The article provides examples of such images and discusses the potential for widespread misuse. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm (misinformation and potential social disruption).

Deepfakes for all: Uncensored AI art model prompts ethics questions

2022-08-24
FocusTechnica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Stable Diffusion) whose use has directly led to harms including violations of rights (non-consensual pornography, sexual exploitation), harm to individuals (trauma, blackmail threats), and harm to communities (spread of objectionable content). The article details realized harms and risks that have materialized, not just potential future harms. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to significant harms.

Controversy over the creation of an artificial intelligence that generates pornography

2022-09-05
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating pornographic images, which fits the definition of an AI system. The article discusses ethical concerns and potential harms related to bias and societal impact but does not document any actual harm or incident caused by the AI system. There is no indication that the AI's use has directly or indirectly led to injury, rights violations, or other harms. Therefore, this event does not qualify as an AI Incident or AI Hazard. Instead, it provides contextual and ethical discussion about the AI system's implications, fitting the definition of Complementary Information.

AI is getting better at generating pornography. We may not be ready for the consequences. – Tecno

2022-09-02
Es de Latino News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate pornographic images, including deepfakes of real people without consent, which constitutes a violation of human rights and can cause harm to individuals and communities. The harms described, such as harassment, defamation, and economic impact on adult content creators, are already occurring. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.

An artificial intelligence that produces porn has been created, sparking controversy

2022-09-06
MARCA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that generates pornographic content, which is a clear AI system involvement. There is no explicit mention of realized harm such as injury, rights violations, or other direct negative impacts. The concerns raised are about potential ethical and societal implications, including data sourcing and stereotypical representations, which could plausibly lead to harm in the future. Since no actual harm has been reported yet, and the article focuses on describing the AI system and the surrounding debate, this fits best as an AI Hazard, reflecting plausible future harm from the AI system's use.

Controversy over a website that generates pornography with artificial intelligence

2022-09-05
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI image generator) that produces pornographic content, which is a clear AI system involvement. However, the article does not report any realized harm such as injury, rights violations, or other direct consequences caused by the AI system. Instead, it discusses ethical concerns and potential negative impacts on adult content creators, which are plausible future harms but not confirmed incidents. Therefore, this qualifies as an AI Hazard due to the plausible risk of harm and ethical issues arising from the AI system's use.

Artificial intelligence can now generate pornography, but it carries over the same problems as that industry

2022-09-03
Hipertextual
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (image generation AI) used to create pornographic content, which is a new application with significant social implications. However, it does not report any actual harm or incident caused by the AI system at this time. The concerns raised are about potential future impacts, social patterns, and intellectual property challenges, which are plausible risks but not realized harms. Therefore, this event fits best as an AI Hazard, since the AI system's use could plausibly lead to harms such as rights violations or social harm in the future, but no direct or indirect harm has yet occurred as described in the article.

AI is getting better at generating pornography. We may not be ready for the consequences.

2022-09-02
La Neta Neta
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems generating pornographic images, including non-consensual deepfake pornography, which causes harm to individuals (harassment, violation of consent) and communities (ethical and societal impacts). The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. Although some harms are potential or emerging, the article references existing harms such as harassment and economic impact on sex workers. Hence, this is not merely a hazard or complementary information but an AI Incident.