Normandy Deepfake Pornography Incident Involving Minors

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Authorities in Saint-Hilaire-du-Harcouët, Normandy, have launched an investigation after deepfake pornographic videos featuring twelve schoolgirls from the private Immaculée Conception school were distributed. The perpetrators used AI technology to superimpose the minors' faces onto explicit scenes, violating their privacy and human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly describes the creation and distribution of deepfake videos, which are AI-generated synthetic content. The harm is realized: the victims are identified and the videos are of a sexual nature, causing personal and community harm. The AI system's use in generating these deepfakes is central to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Human wellbeing; Accountability; Robustness & digital security

Industries
Education and training; Media, social platforms, and marketing; Digital security

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological; Reputational; Public interest

Severity
AI incident

AI system task
Content generation; Recognition/object detection


Articles about this incident or hazard

Manche: investigation opened after the distribution of "deepfakes" targeting middle-school girls

2025-03-12
Mediapart
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and distribution of deepfake videos, which are AI-generated synthetic content. The harm is realized: the victims are identified and the videos are of a sexual nature, causing personal and community harm. The AI system's use in generating these deepfakes is central to the incident, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals.
An investigation opened in the Manche department after the distribution of "deepfakes" targeting middle-school girls

2025-03-12
Le Télégramme
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create and distribute harmful sexual videos targeting minors, which constitutes a violation of rights and causes harm to individuals and communities. The AI system's use in generating these deepfakes has directly led to harm, qualifying this as an AI Incident under the framework.
In Normandy, an investigation opened after the distribution of "deepfakes" targeting middle-school girls

2025-03-12
Paris Normandie
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that manipulate or fabricate realistic images or videos. The creation and dissemination of sexual deepfake videos without consent directly harms the victims' dignity, privacy, and rights, constituting a violation of human rights and causing harm to the community. The involvement of AI in generating these deepfakes is explicit, and the harm has already occurred, meeting the criteria for an AI Incident.
"Deepfake" chez les ados : on vous explique le phénomène de ces vidéos pornographiques truquées et pourquoi elles sont dangereuses

2025-03-12
France 3 Hauts-de-France
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based deepfake technology to create pornographic videos falsely depicting young girls, including minors, which is a direct violation of their rights and causes significant harm to their reputation and privacy. The AI system's use in generating these videos is central to the harm. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals and communities. The ongoing investigation and the number of victims confirm that the harm is materialized, not just potential.
Complaints filed in a "deepfake" case targeting middle-school girls in the southern Manche

2025-03-12
ici par France Bleu et France 3
Why's our monitor labelling this an incident or hazard?
The article describes an incident where deepfake technology, an AI system capable of generating realistic manipulated videos, was used maliciously to create pornographic content involving minors. This constitutes a violation of rights and harm to individuals, fulfilling the criteria for an AI Incident. The involvement of AI in the creation of the harmful content and the resulting direct harm to the victims justifies classification as an AI Incident.
Investigation opened after the distribution of a doctored video involving middle-school girls

2025-03-12
Angers Info
Why's our monitor labelling this an incident or hazard?
The article describes the use of deepfake technology, an AI system capable of generating realistic manipulated videos, to create pornographic content involving minors. This constitutes a violation of human rights and a significant harm to the victims. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident under the framework.
Middle-school girls victims of sexual deepfakes, an investigation opened

2025-03-12
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions deepfakes created using AI, which are false videos with sexual content targeting minors. The sharing of these videos on social media has caused harm to the victims, including violations of their rights and psychological harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities). The investigation confirms the harm is realized, not just potential.
Diffusion de "deepfakes" visant une douzaine de collégiennes dans la Manche, une enquête ouverte

2025-03-12
La Provence
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and dissemination of deepfake videos, which are AI-generated synthetic media. The harm is realized as the victims are subjected to sexualized fake videos, violating their rights and causing psychological and social harm. The involvement of AI in generating deepfakes directly leads to harm to the individuals and communities involved, fitting the definition of an AI Incident.
Investigation opened after the distribution of sexual "deepfakes" targeting middle-school girls in the Manche department

2025-03-12
leparisien.fr
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of sexual deepfake videos of schoolgirls, which are AI-generated synthetic media. This use of AI has directly caused harm to the victims (minors) by violating their privacy and dignity, and has broader social harm implications. The involvement of AI in generating deepfakes is explicit, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Manche: investigation opened after the distribution of a doctored sexual video featuring middle-school girls

2025-03-12
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system was used to fabricate deepfake videos showing middle-school girls (minors) in sexual scenes, which is a clear violation of human rights and causes harm to the victims. The dissemination of such content has already occurred, with multiple victims identified and legal action underway. This meets the criteria for an AI Incident because the AI system's use directly led to harm (violation of rights and harm to individuals and communities).
Deepfakes: middle-school girls victims of sexual identity theft

2025-03-12
CNEWS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake images of minors for sexual purposes, which constitutes a violation of human rights and causes harm to the victims. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the images circulated and victims were identified and reported the abuse.
Manche: des "deepfakes" à caractère sexuel visent douze collégiennes, une enquête ouverte

2025-03-13
Le Figaro
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI-generated deepfake technology to create sexual images of minors without their consent, which is a direct violation of their rights and causes significant harm. The involvement of AI systems in generating these deepfakes is explicit and central to the incident. The harm is realized, not just potential, as the images have been disseminated and an investigation is underway. Therefore, this qualifies as an AI Incident under the framework definitions.
A dozen middle-school girls victims of sexual deepfakes in the Manche department

2025-03-13
20 Minutes
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated deepfake videos of sexual nature have been created and disseminated targeting at least a dozen schoolgirls. The use of AI to produce these fake videos without consent is a clear violation of human rights and causes harm to the victims. The involvement of AI in generating the harmful content and the resulting impact on the victims meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use.
" Deepfakes " : 12 collégiennes piégées dans une vidéo porno diffusée sur les réseaux sociaux

2025-03-13
Courrier picard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake technology, which is an AI system capable of generating realistic fake videos. The use of this AI system has directly led to harm by creating and spreading sexualized fake videos of minors without consent, which is a clear violation of rights and causes significant harm to the victims. The involvement of AI in the creation of these videos and the resulting harm meets the criteria for an AI Incident under the OECD framework.
In the Manche department, middle-school girls fall victim to sexual "deepfakes"

2025-03-13
rtl.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to generate deepfake videos without consent, depicting sexual content involving minors. This constitutes a violation of human rights and causes harm to the individuals involved. The AI system's role is pivotal in creating these harmful fake videos, making this an AI Incident under the framework's definition of harm to persons and violation of rights.
Manche: middle-school girls victims of sexual "deepfakes", an investigation opened

2025-03-13
RMC
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the creation and circulation of deepfake videos, which are AI-generated synthetic media. The harm is realized as the videos are sexual in nature and involve minors, causing psychological and reputational harm, thus violating their rights. The involvement of AI in generating deepfakes and the resulting harm to the victims qualifies this as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
Manche: middle-school girls victims of sexual deepfakes

2025-03-13
Linfo.re
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media that can realistically depict individuals in fabricated scenarios. The creation and distribution of sexual deepfakes targeting minors constitute a violation of their rights and cause significant harm. Since the event describes actual harm occurring due to the use of AI systems (deepfake generation), it qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.
An investigation opened after the distribution of sexual "deepfake" videos targeting middle-school girls

2025-03-13
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of deepfake videos, which are AI-generated synthetic media, depicting sexual content involving real individuals without their consent. This use of AI has directly caused harm to the victims, including minors, violating their rights and causing significant personal and social harm. The involvement of AI in producing these videos and the resulting harm meets the criteria for an AI Incident under the OECD framework.
"Deepfakes" à caractère porno visant 12 collégiennes: pourquoi l'enquête s'annonce compliquée

2025-03-14
RMC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are manipulated video content generated by AI. The harm is realized as the victims' images are degraded and their privacy and dignity violated, constituting a violation of human rights and harm to the community. The AI system's use directly led to this harm, making this an AI Incident under the framework.