Essex Man Jailed for AI-Generated Deepfake Pornography

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Brandon Tyler, a 26-year-old bar worker from Braintree, Essex, was sentenced to five years in prison for using AI to create and distribute deepfake pornographic images of 20 women. His actions, involving non-consensual explicit content and online harassment, were condemned as an expression of toxic masculinity and a severe violation of human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the perpetrator used AI to create deepfake images, which were shared online to harass and degrade 20 women. This use of AI directly led to harm, including emotional distress, harassment, and violation of privacy and rights. The involvement of AI in generating harmful content and the resulting realized harm to individuals and communities fits the definition of an AI Incident under violations of human rights and harm to communities.[AI generated]
AI principles
Respect of human rights
Privacy & data governance
Human wellbeing
Safety
Transparency & explainability
Robustness & digital security
Accountability

Industries
Media, social platforms, and marketing
Digital security

Affected stakeholders
Women

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Vile Essex man jailed for sharing horrendous and 'degrading' images of 20 women

2025-04-04
Essex Live
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used AI to create deepfake images, which were shared online to harass and degrade 20 women. This use of AI directly led to harm, including emotional distress, harassment, and violation of privacy and rights. The involvement of AI in generating harmful content and the resulting realized harm to individuals and communities fits the definition of an AI Incident under violations of human rights and harm to communities.

Pictured: the barman who used AI to create deepfake porn images of women and girls

2025-04-04
Braintree and Witham Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used AI to create deepfake images, which is an AI system's use leading to direct harm. The harms include violations of privacy, harassment, and emotional distress to the victims, which fall under violations of human rights and harm to communities. The AI system's use was central to the incident, making this an AI Incident rather than a hazard or complementary information.

Essex man who used AI to create deepfake pornography is jailed

2025-04-04
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to create deepfake pornography, which is a form of manipulated content that infringes on individuals' rights and causes harassment. The harm has materialized as the perpetrator was convicted and jailed for offenses related to harassment and sharing intimate images without consent. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malicious use.

"My depression spiralled": Women speak out after vile Essex man jailed for sharing deepfake images - The Mirror

2025-04-05
The Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used AI to create deepfake nude images of women, which were shared without consent, causing emotional and psychological harm. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The AI system's use was central to the harm caused, as it enabled the creation of realistic fake images that led to distress and harassment of the victims.

Women speak out after deep fake nudes created 'without knowledge or consent'

2025-04-05
Essex Live
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deep fake technology) to create manipulated nude images without consent, which were then shared online, causing direct harm to the victims' mental health, privacy, and dignity. This constitutes a violation of human rights and inflicts harm on individuals and communities. The harm is realized, not just potential, and the AI system's use was central to the offense. Therefore, this qualifies as an AI Incident under the framework.

Sick freak uses AI to terrorise 20 women with fake naked pics as he plasters them online - Daily Star

2025-04-05
Daily Star
Why's our monitor labelling this an incident or hazard?
The use of AI deepfake software to create non-consensual fake nude images constitutes the use of an AI system that directly caused harm to individuals, including violations of privacy, harassment, and emotional trauma. The event involves realized harm (emotional and psychological) caused by the AI-generated content, fitting the definition of an AI Incident due to violations of human rights and harm to persons.

Man jailed for using AI to create deepfake porn of 20 victims - National Daily Press

2025-04-06
National Daily Press
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the perpetrator used artificial intelligence to create deepfake images, which is an AI system involvement. The use of AI to generate non-consensual explicit images and distribute them constitutes a violation of human rights and causes harm to the victims' mental health and privacy. This meets the criteria for an AI Incident as the AI system's use directly led to harm (violation of rights and emotional distress).