AI Image Generators Reproduce Non-Consensual and Copyrighted Content, Violating Privacy and IP Rights

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers found that AI image generators such as Stable Diffusion and Imagen can produce non-consensual pornographic images, near-exact replicas of real people's photos, copyrighted works, and even child exploitation material. These outputs violate privacy, consent, and intellectual property rights, causing direct harm to individuals and artists.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (diffusion models) that have been shown to memorize and reproduce copyrighted images, which constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition (c). The reproduction of copyrighted photos by the AI system is a direct consequence of its development and use, leading to realized harm to copyright holders. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Safety
Robustness & digital security
Transparency & explainability
Accountability
Human wellbeing

Industries
Media, social platforms, and marketing
Arts, entertainment, and recreation

Affected stakeholders
General public
Children
Other

Harm types
Human or fundamental rights
Psychological
Economic/Property
Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

AI Image Generators Can Exactly Replicate Copyrighted Photos

2023-02-02
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (diffusion models) that have been shown to memorize and reproduce copyrighted images, which constitutes a violation of intellectual property rights, a form of harm under the AI Incident definition (c). The reproduction of copyrighted photos by the AI system is a direct consequence of its development and use, leading to realized harm to copyright holders. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Testing shows AI-based image generation systems can sometimes generate copies of trainer data

2023-02-02
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (image generation models like Stable Diffusion, Imagen, and Dall-E 2) and their use leading to the unauthorized reproduction of copyrighted images, which is a breach of intellectual property rights. The harm (copyright violation) has already occurred as the systems have generated copies of protected images. Therefore, this qualifies as an AI Incident under the definition of violations of intellectual property rights caused by AI system use.
AI models spit out photos of real people and copyrighted images (Melissa Heikkilä/Technology Review)

2023-02-03
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The AI systems (image generation models) are shown to produce outputs that directly replicate real individuals' photos and copyrighted works, which constitutes a violation of privacy and intellectual property rights. This is a direct harm caused by the AI systems' outputs, fitting the definition of an AI Incident due to violations of human rights (privacy) and intellectual property rights.
Researchers Prove AI Art Generators Can Simply Copy Existing Images

2023-02-01
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (image generators like Stable Diffusion and Imagen) and their capability to memorize and reproduce copyrighted or sensitive images, which directly implicates violations of intellectual property rights and raises privacy concerns. These harms have already occurred or are occurring, as AI-generated images replicate copyrighted content and potentially sensitive personal data. The research establishes a direct link between the AI systems' behavior and these harms, fulfilling the criteria for an AI Incident rather than a mere hazard or complementary information. The existence of actual reproduced copyrighted images and the demonstrated risk of privacy breaches confirm realized harm.
Paper: Stable Diffusion "memorizes" some images, sparking privacy concerns

2023-02-01
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article discusses the discovery of approximate memorization of training images by AI diffusion models, which could lead to privacy violations or copyright infringement if exploited. However, the current evidence shows only a very low rate of memorization, and no direct harm has yet occurred. The researchers emphasize addressing these vulnerabilities proactively to prevent future incidents. This event therefore represents a plausible risk of harm stemming from AI system development and use, but not an actualized harm, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
AI models spit out photos of real people and copyrighted images

2023-02-03
MIT Technology Review
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI models like Stable Diffusion and Imagen can generate exact copies of real people's photos and copyrighted works, which constitutes a violation of privacy and intellectual property rights. These harms have already occurred as the AI systems have been used to produce these images. Therefore, this qualifies as an AI Incident due to the direct link between the AI systems' outputs and the harm to individuals' privacy and artists' rights.
AI Porn: Creepier Than You Think - Philadelphia Weekly

2023-02-02
Philadelphia Weekly
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI models like Stable Diffusion) being used to create harmful content such as non-consensual pornographic images, fake celebrity nudes, and child exploitation imagery. These uses have directly led to harms including violations of consent, sexual violence, and potential psychological harm to individuals and communities. The AI system's role is pivotal as it enables the creation of such content easily and at scale. Therefore, this qualifies as an AI Incident under the framework, as the harms are realized and directly linked to the AI system's use and misuse.