AI-Generated Deepfake Pornography Causes Widespread Harm to Women and Children


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems are increasingly used to create non-consensual deepfake pornographic images and videos, primarily targeting women and children. These manipulations result in emotional distress, harassment, and reputational damage, while also enabling sexual blackmail. Experts warn that legal frameworks are unprepared for the scale and impact of this AI-driven abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly (deepfake algorithms) used to produce manipulated sexual images and videos without consent, causing direct harm to individuals (mostly women) through exploitation, harassment, and extortion. The harms include violations of rights, emotional and reputational damage, and sexual exploitation, which fall under the definitions of AI Incident. The FBI warning confirms that these harms are ongoing and realized, not just potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Women; Children

Harm types
Psychological; Reputational; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


A nők a mesterséges intelligencia elsődleges áldozatai [Women are the primary victims of artificial intelligence]

2023-07-26
Index.hu

Szakértők: A nők a mesterséges intelligencia elsődleges áldozatai [Experts: women are the primary victims of artificial intelligence]

2023-07-25
hirado.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography used without consent, causing real harm such as sexual exploitation, harassment, and blackmail. The harms include violations of rights and significant emotional and reputational damage to individuals, especially women. The AI systems' use in creating and disseminating these manipulated images and videos is central to the harm described. Therefore, this event qualifies as an AI Incident due to direct harm caused by AI system use.

Súlyos gondokat okoznak a mesterséges intelligenciával létrehozott pornográf képek [Pornographic images created with artificial intelligence are causing serious problems]

2023-07-25
Magyar Nemzet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated pornographic content (deepfakes) that directly harm individuals by violating their rights and causing psychological and social harm. The AI's use in creating non-consensual explicit images and messages is a clear case of AI misuse leading to realized harm, fitting the definition of an AI Incident under violations of human rights and harm to communities.

BAMA - A nők lettek a mesterséges intelligencia első áldozatai [Women have become the first victims of artificial intelligence]

2023-07-25
BAMA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated images and videos (deepfakes) that have directly led to harm such as sexual harassment, exploitation, and violation of personal rights. The harms are realized and ongoing, including emotional distress and reputational damage. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant harm to individuals and communities, specifically the women targeted by this deepfake content.

A nők a mesterséges intelligencia elsődleges áldozatai [Women are the primary victims of artificial intelligence]

2023-07-25
Paraméter
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated pornographic images and videos (deepfakes) that are used without consent, leading to direct harm to the health and well-being of individuals (emotional distress, harassment) and violations of rights (privacy, dignity). The article provides concrete examples and expert commentary on the harms caused, fulfilling the criteria for an AI Incident.

Nagyon nagy veszélyben vannak a nők, az FBI is megszólalt [Women are in very great danger; even the FBI has spoken out]

2023-07-25
Nap Híre
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic content that directly harms women by violating their rights and enabling sexual blackmail, which is a clear violation of human rights and harm to communities. The harm is occurring, not just potential, and the AI system's use is central to the incident. Hence, it meets the criteria for an AI Incident.

Az MI által generált pornográf tartalmak áldozatai a nők és a gyerekek [Women and children are the victims of AI-generated pornographic content]

2023-07-25
euronews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake pornographic content without consent, directly causing harm to individuals (exploitation, harassment, violation of rights), which fits the definition of an AI Incident. The article also covers governance responses, but the primary focus is on the harms caused by AI-generated content. Therefore, the classification is AI Incident.

Χωρίς όρια η κακοποίηση των γυναικών με τροποποιημένο περιεχόμενο τεχνητής νοημοσύνης [No limits to the abuse of women through manipulated artificial-intelligence content]

2023-07-25
Digital Life!
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as generating manipulated images and videos (deepfakes) that have caused real harm to individuals, especially women, through sexual harassment, reputational damage, and exploitation. These harms fall under violations of human rights and harm to communities. The article reports on actual occurrences and impacts, not just potential risks or general commentary, thus qualifying as an AI Incident.

Τεχνητή νοημοσύνη: Μάστιγα οι εφαρμογές φωτογραφιών που "γδύνουν ψηφιακά" γυναίκες [Artificial intelligence: photo apps that "digitally undress" women are a scourge] | LiFO

2023-07-24
LiFO
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake and generative AI photo applications) whose use has directly led to significant harms: non-consensual creation and distribution of pornographic images, sexual extortion, and psychological harm to women and minors. These harms constitute violations of human rights and harm to communities. The article reports ongoing incidents and real consequences, not just potential risks. Therefore, this qualifies as an AI Incident under the OECD framework.

Όταν η τεχνητή νοημοσύνη μπορεί να "γδύσει" οποιαδήποτε γυναίκα [When artificial intelligence can "undress" any woman]

2023-07-25
Η Ναυτεμπορική
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation, generative AI models like Stable Diffusion) to produce harmful content that directly leads to violations of human rights, specifically privacy and dignity, and causes harm to individuals and communities. The harms described (reputational damage, sexual harassment, extortion) are realized and ongoing, meeting the criteria for an AI Incident. The article also discusses societal and legal responses, but the primary focus is on the harm caused by AI-generated non-consensual imagery, which is a direct AI Incident.

Μάστιγα οι εφαρμογές τεχνητής νοημοσύνης που "γδύνουν ψηφιακά" γυναίκες [AI apps that "digitally undress" women are a scourge]

2023-07-24
in2life.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used to create non-consensual sexual deepfake images and videos, which directly harm women and others by violating their rights and causing psychological and social damage. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities. The presence of AI is clear, the harm is realized, and the event is not merely a warning or potential risk but an ongoing issue with documented cases and societal impact.