AI-Generated Deepfake Porn Causes Harm and Prompts Legislative Action

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The creation and distribution of non-consensual explicit images via AI-generated deepfake pornography has caused significant psychological and reputational harm to individuals, including Rep. Alexandria Ocasio-Cortez and actress Uldouz Wallace. In response, lawmakers, led by Ocasio-Cortez, are proposing federal legislation to penalize perpetrators and protect victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating deepfake images that have been used to create nonconsensual pornographic content of a public figure, causing real psychological and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community. The article also highlights ongoing harm and responses, but the primary focus is on the realized harm caused by the AI-generated deepfakes.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Human wellbeing; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence; Arts, entertainment, and recreation

Affected stakeholders
Women

Harm types
Psychological; Reputational; Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AOC reveals the horror of seeing a deepfake porn image of herself

2024-04-08
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake images that have been used to create nonconsensual pornographic content of a public figure, causing real psychological and reputational harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the individual and community. The article also highlights ongoing harm and responses, but the primary focus is on the realized harm caused by the AI-generated deepfakes.
Seeing fake porn of myself 'shocking', says AOC as she launches AI bill

2024-04-09
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography depicting Alexandria Ocasio-Cortez without consent, causing psychological harm and trauma, which fits the definition of harm to a person (a). The AI system's use in creating these images is central to the incident. The legislative effort to criminalize such acts is a response to this harm. Since the harm is realized and directly linked to the AI system's misuse, this is classified as an AI Incident.
Ocasio-Cortez 'shocked' by porn deepfake in her likeness

2024-04-09
AOL
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake pornographic images without consent, directly causing harm to the individual depicted, including psychological trauma and violation of personal rights. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person (psychological and reputational harm) and violations of rights. The article also mentions ongoing legislative responses, but the primary focus is on the realized harm from the AI-generated deepfake content.
AOC opens up about trauma of seeing deepfake AI porn of herself:...

2024-04-09
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornographic images of Rep. Alexandria Ocasio-Cortez, which caused her psychological harm. The AI system's use in creating these nonconsensual images constitutes a violation of personal rights and causes harm to the individual, fitting the definition of an AI Incident. The legislative response further underscores the recognition of this harm.
Fake Photos, Real Harm: AOC and the Fight Against AI Porn

2024-04-08
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (generative AI and deepfake technology) being used to create nonconsensual sexually explicit images and videos, which have caused real psychological and social harm to victims, including trauma, harassment, and threats to safety. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article also highlights the scale and severity of this harm and ongoing legislative responses, confirming the materialization of harm rather than just potential risk.
AOC likens AI-generated deepfake to real rape after seeing graphic faux image of herself

2024-04-09
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake images that have directly caused psychological harm to Rep. Alexandria Ocasio-Cortez, a clear example of harm to a person. The deepfake technology is an AI system used maliciously to produce nonconsensual pornography, which violates personal rights and causes trauma. The article also mentions legislative responses to this harm, but the primary focus is on the realized harm caused by the AI-generated content. Therefore, this qualifies as an AI Incident.
How Alexandria Ocasio-Cortez and other women politicians are becoming easy targets of deepfake porn

2024-04-10
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deep learning-based deepfake generation) to create harmful content that directly causes psychological and emotional harm to individuals, particularly women politicians and celebrities. This harm falls under the category of harm to persons (a) and harm to communities (d) due to the widespread targeting and impact. Since the harm is realized and ongoing, and the AI system's use is central to the harm, this qualifies as an AI Incident.
AOC Describes 'Trauma' of Seeing Deepfake Porn of Herself

2024-04-10
The Cut
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake pornography, which is an AI system generating harmful content. The harm is realized as psychological trauma and violation of personal rights, fitting the definition of an AI Incident under violations of human rights and harm to communities. The event involves the use of AI systems to create nonconsensual explicit images, directly causing harm. The legislative effort mentioned is a response to this harm, but the primary event is the AI Incident itself.
AOC Sounds Alarm Over Deepfakes After She's Targeted with AI Porn: 'People Are Going to Kill Themselves Over This'

2024-04-08
Mediaite
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated deepfake images, which are created by AI systems. The use of these images has directly caused harm to the individual targeted, including psychological trauma and violation of personal rights. The article highlights the nonconsensual nature of such AI-generated content and the societal harm it causes at scale, including threats to bodily autonomy and dignity. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content. The legislative efforts mentioned are responses to this incident, but the primary focus is on the harm caused by the AI system's use.
Ocasio-Cortez proposes anti-deepfake porn legislation

2024-04-09
ABC15 Arizona
Why's our monitor labelling this an incident or hazard?
The article discusses the problem of non-consensual deepfake pornography created using AI, which is a recognized harm (violation of rights and harm to individuals). However, the article's main focus is on the legislative response (the DEFIANCE Act) proposed to address this harm. It does not report a new specific AI Incident (a particular event of harm caused by AI) or an AI Hazard (a new potential risk). Instead, it provides complementary information about societal and governance responses to an ongoing AI-related harm issue.
States race to restrict deepfake porn as it becomes easier to create

2024-04-11
Georgia Public Broadcasting
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI used to create deepfake pornographic content. The harm is realized and ongoing, including violations of privacy, consent, and potentially human rights, as well as psychological and reputational harm to victims. The article documents actual incidents of harm (AI Incident) and legislative efforts to combat these harms. Therefore, this qualifies as an AI Incident because the AI-generated deepfake pornography has directly led to harm to individuals and communities.
AOC facing trauma over 'deepfake' porn depicting her: 'Digitizing violent humiliation'

2024-04-09
WSBT
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake pornography that has caused direct psychological harm to Rep. Ocasio-Cortez, a person of color and a public figure. The harm is not hypothetical but realized, as she experiences resurfaced trauma and distress. The AI system's role in generating the manipulated images is central to the harm. This meets the definition of an AI Incident because the AI system's use has directly led to harm to a person, fulfilling criterion (a) under AI Incident. The event is not merely a potential risk or a complementary update but a clear case of harm caused by AI misuse.
Ocasio-Cortez proposes anti-deepfake porn legislation

2024-04-09
Scripps News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems in the form of deepfake technology used to create non-consensual pornographic content, which constitutes a violation of rights (harm category c). However, the article's main focus is on the introduction of legislation (the DEFIANCE Act) to address this harm and hold perpetrators accountable. There is no new specific incident or hazard event described; instead, the article details a governance response to an ongoing AI-related harm. Therefore, this is best classified as Complementary Information, as it provides context and societal response to AI harms rather than reporting a new incident or hazard.
States race to restrict deepfake porn as it becomes easier to create

2024-04-10
NJTODAY.NET
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake pornography that has been used to create and distribute non-consensual sexual images, causing real harm to victims such as Uldouz Wallace and minors at Westfield High School. The involvement of AI systems in creating these manipulated images is clear, and the harms include violations of privacy, consent, and potentially other human rights. The article also references legislative efforts to combat these harms, but the primary focus is on the realized harm caused by AI deepfake technology. Hence, this is an AI Incident due to direct harm caused by AI-generated content.
Alexandria Ocasio-Cortez recounts horror of seeing herself in 'deepfake porn'

2024-04-09
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating deepfake pornographic content without consent, which has directly caused psychological harm to Alexandria Ocasio-Cortez, a person. This fits the definition of an AI Incident as it involves harm to a person (a). The AI system's role is pivotal in creating the harmful content. The article also references the broader societal impact and legislative responses, but the core event is the harm caused by the AI-generated deepfake, not just potential or future harm or complementary information. Therefore, the classification is AI Incident.