Teacher Forced to Quit After Colleague Creates and Distributes Deepfake Pornography

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Kirsty Pellant, a primary school teacher in the UK, was forced to quit her job after a colleague used AI deepfake technology to create and distribute non-consensual pornographic images of her online. The incident led to stalking, harassment, and severe emotional and professional harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves deepfake technology, an AI system capable of generating realistic fake images or videos. The malicious use of this AI system by a colleague to create and distribute non-consensual pornographic content caused direct harm to the victim, including violation of her rights, emotional distress, and loss of employment. This fits the definition of an AI Incident, as the AI system's use directly led to harm to a person and a violation of rights.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Education and training

Affected stakeholders
Workers; Women

Harm types
Psychological; Reputational; Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

I had to quit my job after a colleague spread deepfake porn of me

2026-03-06
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, an AI system capable of generating realistic fake images or videos. The malicious use of this AI system by a colleague to create and distribute non-consensual pornographic content caused direct harm to the victim, including violation of her rights, emotional distress, and loss of employment. This fits the definition of an AI Incident, as the AI system's use directly led to harm to a person and a violation of rights.
I found deep fakes of me on sick porn sites & exposed predator's evil network

2026-03-08
The Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake images and videos, a form of AI system output used maliciously. The harm caused includes violations of privacy, emotional and psychological harm, reputational damage, and stalking, all of which constitute violations of human rights and harm to individuals. The AI system's role is pivotal, as it enabled the creation of realistic fake content used to harass and harm the victim and others. The harm has materialised rather than remaining merely potential, and the event involves criminal activity linked to AI misuse. It therefore qualifies as an AI Incident.
'I was forced to quit my job after my colleague deepfaked my images'

2026-03-06
Mirror
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake AI technology to create fake pornographic images of a woman without her consent, which were then distributed online. This use of AI directly led to stalking offenses, harassment, and severe personal and professional consequences for the victim, including loss of employment. The AI system's role in generating the harmful content is pivotal to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to the individual.
Teacher opens up after quitting job when colleague deepfaked her images

2026-03-06
WalesOnline
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based deepfake technology to create fake pornographic images of the victim, which were then distributed without consent. This misuse caused direct harm to the victim's reputation, emotional well-being, and professional life, fulfilling the criteria for an AI Incident under violations of human rights and harm to the individual. The involvement of AI in generating the deepfake images and the resulting realized harm justifies classification as an AI Incident.
'My colleague deepfaked my images which left me quitting my job at a school'

2026-03-06
Cornwall Live
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI (deepfake technology) to create manipulated images that caused direct harm to the victim. The misuse of AI-generated content led to stalking, harassment, and significant personal and professional consequences. Because the AI system's use directly contributed to the harm experienced, the event fulfils the criteria for an AI Incident. The article describes realised harm rather than potential harm, so it is not an AI Hazard or Complementary Information; nor is it unrelated, since AI deepfake technology is central to the incident.