Georgia Harrison Confronts AI-Driven Revenge Porn


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Georgia Harrison, whose ex-partner Stephen Bear was jailed for sharing a private sex tape of her without consent, continues to struggle with its widespread availability online. In her ITV documentary Porn, Power, Profit, she investigates AI-enabled deepfake pornography and image-based sexual abuse, tracking down postings of the video and urging government and tech platforms to strengthen protections.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details a real harm caused by the illegal sharing of private sexual content, a violation of rights and personal privacy. The mention of the deepfake porn industry implies the involvement of AI systems in creating or distributing manipulated sexual content. The ongoing widespread distribution of the video, and the investigation into its sources and the advertising around it, indicate direct harm linked to AI-enabled technologies. Hence, this is an AI Incident: the AI system's use or misuse has directly led to harm to an individual and potentially to others in similar situations.[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Safety · Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological · Reputational · Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Georgia Harrison's 'struggle' at how 'widespread' her sex tape is

2025-02-10
Daily Mail Online

Georgia Harrison shares revenge porn 'struggle'

2025-02-11
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos as a form of image-based sexual abuse causing significant emotional and privacy harm to victims. The sharing of private videos without consent, including AI-generated fakes, constitutes a violation of rights and causes ongoing harm. The involvement of AI in creating fake videos that evoke the same emotional harm as real videos meets the criteria for an AI Incident, as the AI system's use has directly led to harm to individuals and communities. The article also references legal frameworks addressing these harms, reinforcing the recognition of the issue as a realized harm rather than a potential one.

Georgia Harrison 'Really Struggles' With Her Adult Video Still Easily Available Online

2025-02-11
TimesNow
Why's our monitor labelling this an incident or hazard?
While the article references deepfakes and image-based sexual abuse, which often involve AI technologies, the primary event described is the ongoing availability of non-consensual explicit content and its emotional impact on Georgia Harrison. There is no direct or indirect indication that an AI system's development, use, or malfunction caused this harm. The documentary's discussion of AI-related issues is contextual and does not report a new AI Incident or Hazard. Therefore, this is best classified as Complementary Information providing context on AI-related societal issues rather than a direct AI Incident or Hazard.

Georgia Harrison opens up about Stephen Bear video

2025-02-11
GEO TV
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI-related technology (deepfakes and online dissemination platforms likely using AI for content distribution) that has directly led to harm to a person (image-based sexual abuse and privacy violation). The criminal conviction for sharing the private video and the video's ongoing widespread availability online constitute realized harm. The article's focus on the documentary about deepfakes and image-based sexual abuse further supports the involvement of AI systems in causing or exacerbating harm. Therefore, this qualifies as an AI Incident due to direct harm to an individual's rights and well-being caused by AI-enabled or AI-related misuse.

Georgia Harrison fronts ITV sexual abuse documentary

2025-02-11
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of deepfake technology used to create and distribute manipulated sexual content without consent, causing direct harm to the individual and others similarly affected. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and personal harm). The documentary and legislative context provide complementary information but do not overshadow the primary harm described. Therefore, the classification is AI Incident.