Exposed AI Deepfake Database Raises Human Rights Concerns

Security researcher Jeremiah Fowler discovered an unsecured database belonging to South Korea’s GenNomis, exposing over 90,000 AI-generated explicit images, including deepfakes of celebrities and depictions of minors. The incident highlights severe privacy breaches and the potential weaponization of AI to generate nonconsensual harmful content. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes an AI system (an AI 'nudify' app) generating explicit deepfake images, including non-consensual use of real individuals' faces and images of celebrities depicted as children. The exposure of this database constitutes a direct violation of privacy and ethical norms, falling under violations of human rights and breaches of laws protecting personal rights. The AI system's use has directly led to realized harm, qualifying this as an AI Incident rather than a hazard or complementary information. The harm is clear and ongoing, involving serious privacy and legal concerns. [AI generated]
AI principles
Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Accountability; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Consumer services

Affected stakeholders
Children; Other

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation

Articles about this incident or hazard

Exposed deepfake database shows how users manipulated celeb pics

2025-03-31
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (an AI 'nudify' app) generating explicit deepfake images, including non-consensual use of real individuals' faces and images of celebrities depicted as children. The exposure of this database constitutes a direct violation of privacy and ethical norms, falling under violations of human rights and breaches of laws protecting personal rights. The AI system's use has directly led to realized harm, qualifying this as an AI Incident rather than a hazard or complementary information. The harm is clear and ongoing, involving serious privacy and legal concerns.

GenAI website goes dark after explicit fakes exposed

2025-04-01
theregister.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (GenNomis) that generates explicit images, including illegal child sexual abuse material and non-consensual deepfake pornography, which are serious violations of law and human rights. The AI system's outputs were stored in an unsecured cloud bucket, exposing tens of thousands of such images publicly. This exposure and the generation of illegal content directly cause harm to individuals depicted and society at large. The article documents realized harm, not just potential harm, and the AI system's role is pivotal in creating and exposing this content. Hence, it meets the criteria for an AI Incident.

AI-Generated Deepfake Database Breach Exposes Thousands of Explicit Images - ID Tech

2025-04-03
idtechwire.com
Why's our monitor labelling this an incident or hazard?
The event describes a breach of an AI system's database containing AI-generated explicit images, including non-consensual deepfakes of minors and celebrities, which is a clear violation of privacy and ethical norms, constituting harm to individuals and communities. The AI system's use (image generation and face swapping) directly led to the creation and exposure of harmful content. The breach and exposure of this data caused realized harm, not just potential harm. Hence, it meets the criteria for an AI Incident due to violations of rights and harm to communities.

An AI Image Generator's Exposed Database Reveals What People Really Used It For

2025-03-31
WIRED
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate harmful and illegal content, including AI-generated CSAM and nonconsensual sexual images. The exposure of this data and the existence of such content directly harm individuals' rights and well-being, fulfilling the criteria for an AI Incident. The AI system's use in generating and distributing this content is central to the harm described, and the incident involves realized harm, not just potential harm.

GenNomis database exposes 100K sensitive records of AI images - InfotechLead

2025-03-31
InfotechLead
Why's our monitor labelling this an incident or hazard?
The event describes a data breach of an AI system's database containing AI-generated explicit and non-consensual deepfake images, including of minors and celebrities, which directly causes harm through privacy violations, potential extortion, and reputational damage. The AI system's development and use (face-swapping and AI-generated explicit content) are central to the harm. The exposure of this sensitive data without protection and the presence of illegal content constitute a clear violation of rights and ethical norms. The incident has already occurred and caused harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.