AI Platform Civitai Profits from Nonconsensual Deepfake Porn via Bounty System


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Civitai, a major AI model-sharing platform backed by Andreessen Horowitz, enables and profits from the creation of nonconsensual deepfake sexual images of real people. Its 'bounties' feature incentivizes users to generate targeted deepfake models, leading to privacy violations and harm to individuals, especially women and private citizens.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Civitai) that facilitates the generation of nonconsensual sexual images, which constitutes a violation of human rights and causes harm to individuals (harm to health and dignity). The harm is ongoing and directly linked to the AI system's use and development. The platform profits from this activity and has features that encourage further harmful AI model creation. Therefore, this qualifies as an AI incident due to realized harm caused by the AI system's use and by failures in content moderation.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Privacy & data governance, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, General public

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


Andreessen Horowitz Invests in Civitai, Which Profits From Nonconsensual AI Porn

2023-11-14
404 Media

Popular AI platform introduces rewards system to encourage deepfakes of real people

2023-11-14
Yahoo7 Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative models for deepfakes) used to create realistic images of real people, often without their consent, including sexualized content. This use directly leads to violations of human rights, specifically privacy and potentially labor and intellectual property rights, as well as harm to individuals and communities. Because the platform incentivizes such harmful content creation, the AI system's role in causing these harms is pivotal. Therefore, this qualifies as an AI incident under the framework.

Civitai introduces bounties to encourage deep fakes

2023-11-14
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The platform's active encouragement and incentivization of realistic AI deepfakes of real people, especially public figures, involves the use of AI systems to generate synthetic content that can mislead or harm individuals. While the article does not report a specific instance of harm occurring, the nature of deepfakes and their potential for misuse (e.g., misinformation, defamation, privacy violations) makes such harms plausible. The event thus fits the definition of an AI hazard: the development and promotion of these AI-generated deepfakes could plausibly lead to future violations of rights and harm to communities.

Seed investment: Andreessen Horowitz backing AI startup linked to nonconsensual porn of real people | Boing Boing

2023-11-14
Boing Boing
Why's our monitor labelling this an incident or hazard?
The AI system (Civitai) is explicitly described as generating nonconsensual pornographic images of real people, a clear violation of human rights and privacy that fulfills the harm criteria under (c). The platform profits from this use and facilitates the targeted creation of such content, indicating the AI system's direct involvement in causing harm. The investment by Andreessen Horowitz is linked to this harmful AI system, confirming the development and use context. Hence, this is an AI incident rather than a hazard or complementary information.

Popular AI platform paying users for best deepfakes

2023-11-14
MyBroadband
Why's our monitor labelling this an incident or hazard?
The platform uses AI models to generate nonconsensual images, which constitutes a violation of human rights and privacy and a breach of applicable laws protecting fundamental rights. The incentivization of such models through bounties directly harms individuals' dignity and reputation, qualifying as an AI incident. The involvement of AI in generating these images, and the resulting harm to individuals, is explicit and direct.

AI Marketplace Sparks Controversy with Deepfake 'Bounties' on Celebrities, Private Individuals

2023-11-14
Tech Times
Why's our monitor labelling this an incident or hazard?
The marketplace uses AI models to generate deepfake images, which are realistic and nonconsensual, targeting real individuals. The platform incentivizes this behavior through bounties, leading to actual harm in terms of privacy violations and potential mental health impacts. The AI system's use is central to the harm, as it enables the creation and dissemination of these images. The harm is occurring, not just potential, and involves violations of rights and harm to individuals and communities. Hence, this is an AI Incident rather than a hazard or complementary information.

AI Platform Has a "Bounties" System for Creating Deepfakes of Regular People

2023-11-14
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Civitai's AI model marketplace) used to create harmful deepfake content without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The monetization and incentivization of such content production further exacerbate the harm. The harm is ongoing and realized, not merely potential, fulfilling the criteria for an AI incident. The platform's failure to adequately enforce its rules or protect victims also contributes to the harm. Hence, the classification as an AI incident is appropriate.