AI Misuse: Minors Create Deepfake Images of Peers

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

A survey by Thorn reveals that 10% of minors in the U.S. report peers using AI to create deepfake nude images of other children. This misuse of AI technology has led to several student arrests and highlights significant concerns about online safety and child exploitation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes actual misuse of AI systems (generative deepfake tools) to produce intimate images of others without consent, which has led to legal action and constitutes a violation of personal and sexual rights. The AI system’s use directly resulted in harmful, criminal behavior, fitting the definition of an AI Incident.[AI generated]
AI principles
Safety; Privacy & data governance; Respect of human rights; Accountability; Human wellbeing; Robustness & digital security

Industries
Education and training; Media, social platforms, and marketing; Digital security; Government, security, and defence; Healthcare, drugs, and biotechnology

Affected stakeholders
Children

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

One in 10 minors in the U.S. says friends use AI to create nudes of other children

2024-08-28
El Universal
Why's our monitor labelling this an incident or hazard?
The event describes actual misuse of AI systems (generative deepfake tools) to produce intimate images of others without consent, which has led to legal action and constitutes a violation of personal and sexual rights. The AI system’s use directly resulted in harmful, criminal behavior, fitting the definition of an AI Incident.

One in ten minors in the U.S. says friends use AI to generate nudes of other children

2024-08-28
infobae
Why's our monitor labelling this an incident or hazard?
The described activity involves actual misuse of generative AI to produce non-consensual sexualized imagery of minors, constituting direct harm to children’s physical and psychological well-being and legal violations. This fits the definition of an AI Incident because the AI system’s use has directly led to serious harm.

One in 10 minors in the U.S. says friends use AI to create nudes of other children

2024-08-28
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event involves direct misuse of an AI system (generative AI for deepfakes) causing harm—specifically child sexual exploitation and violation of minors’ rights. Arrests of students confirm the harm has materialized, making this an AI Incident under the framework.

One in 10 minors in the U.S. says they have friends who use AI to generate nudes of other children

2024-08-28
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of AI generative systems to produce and share nude deepfake images of minors, resulting in direct harm (sexual exploitation, violation of rights, creation/distribution of child sexual abuse material). These are concrete incidents of abuse facilitated by AI.

Artificial intelligence makes sexual deepfakes of children in the U.S.

2024-08-28
Milenio.com
Why's our monitor labelling this an incident or hazard?
This involves direct harm (the creation and distribution of sexual deepfakes of children) through the use of an AI system, resulting in rights violations, exploitation, and legal action. The harm is realized, so it qualifies as an AI Incident.

The risk of child sexual abuse grows with the use of generative AI and deepfakes among minors

2024-08-29
La Voz
Why's our monitor labelling this an incident or hazard?
The report documents actual cases of minors using AI-generated deepfakes to produce and disseminate explicit sexual images without consent, increasing real sexual abuse risks (e.g., sextortion). Generative AI tools are central to the creation and spread of non-consensual imagery of minors, directly leading to harmful outcomes. Therefore, this meets the definition of an AI Incident.

Minors use dating apps and share nudes, a fact that does not...

2024-08-28
Europa Press
Why's our monitor labelling this an incident or hazard?
The article reports that children are using AI generative tools to produce explicit, non-consensual deepfake images, a misuse of AI that has directly led to violations of minors’ rights and real harm (sextortion, abuse). This constitutes an AI Incident under the framework.

Nearly 20% of young people admit to having used a dating app and visiting porn sites

2024-08-28
La Razón
Why's our monitor labelling this an incident or hazard?
The use of AI generative tools to produce realistic nude images of underage peers without consent constitutes actual harm (sexual exploitation and abuse risk). Although conveyed via a research survey, it describes realized AI-driven harms (non-consensual deepfakes of minors), meeting the AI Incident criteria.

One in ten American minors says friends use AI to generate nudes of other children, a survey reveals

2024-08-28
Vanguardia
Why's our monitor labelling this an incident or hazard?
The described events involve the active use of generative AI by minors to produce and share non-consensual sexual deepfakes of other children and teachers, which directly constitutes harm (child sexual abuse imagery and privacy violations). Arrests and investigations have already occurred, confirming realized harm tied to AI misuse.

Sextortion and deepfakes: alarm over minors using AI to create sexual images

2024-08-28
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The article presents aggregated survey findings about minors’ behaviors and risks, including actual harms (sextortion, non-consensual images) and the use of AI deepfake tools, but it does not report a discrete new incident or an emergent hazard scenario. Instead, it provides context and evidence on existing issues, fitting the definition of Complementary Information.

Worrying! Minors in the U.S. use AI to create nudes of other children

2024-08-28
www.vanguardia.com
Why's our monitor labelling this an incident or hazard?
Generative AI tools have been misused by students to produce explicit deepfake imagery of peers without consent. This constitutes direct harm and violation of rights (sexual abuse of minors), with documented arrests and investigations. Therefore, it is an AI Incident.

Minors reveal that their friends use artificial intelligence to create nudes of other children

2024-08-29
Primera Hora
Why's our monitor labelling this an incident or hazard?
The event involves direct use of AI systems (generative deepfake tools) to produce non-consensual sexual images of minors, causing clear harm—child sexual abuse and exploitation—violating human rights and protections. Multiple students have already been arrested for these AI-driven offenses, confirming that the harm has occurred rather than being a mere risk.

Major study puts parents on alert: children use AI to generate nudes of their friends

2024-08-28
N+
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions minors using generative AI to create nude deepfake images of other children without consent, a clear violation of rights that harms the individuals involved. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident involving violations of human rights and harm to communities.

Aug 28, 2024

2024-08-28
La Neta Neta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI tools to create deepfake nude images of minors without consent, a clear violation of fundamental rights that harms children. AI's role in generating these images is direct and pivotal, fitting the definition of an AI Incident, and the arrests and investigations confirm that harm has occurred rather than merely being possible.

Virtual dating and nudes: a worrying trend among the youngest that we must address

2024-08-28
Qué!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create realistic deepfake images of minors, which are then used in harmful ways such as sextortion and non-consensual sharing. These harms constitute violations of rights and harm to communities, and the AI system's use is directly linked to them, fulfilling the criteria for an AI Incident. The article documents the real impact on victims, not just potential risks, confirming that harm has occurred.

Sextortion and deepfakes: alarm over minors using AI to create sexual images

2024-08-28
Noticias de Norte de Santander, Colombia y el mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create realistic deepfake images of minors without consent, which is linked to sexual abuse risks and sextortion. Generating these images directly harms children and adolescents, violating their rights and exposing them to abuse, so the event meets the criteria for an AI Incident. The article's discussion of the prevalence and impact of these harms confirms that the harm is realized, not just potential.

Kids want social media apps to do more to protect them from the spread of deepfake nudes

2024-09-01
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of generative AI systems to create deepfake nudes of minors, which leads to real harm such as sexual exploitation, sextortion, and revictimization of victims. The harms described include violations of rights and harm to communities (children and minors). Since the AI system's use has directly led to these harms, this qualifies as an AI Incident. The article does not merely warn about potential harm but documents ongoing harm and victimization caused by AI-generated content.

More than 1 in 10 students say they know of peers who created deepfake nudes, report says

2024-08-29
Los Angeles Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered 'undressing' programs used to create deepfake nudes of students without their consent, a direct violation of rights that harms the individuals depicted and their communities. The harm is realized and ongoing, as evidenced by the survey data and reports of increased hotline contacts. The AI system's use in generating these images is central to the harm described, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities.

1 in every 10 minors uses AI to generate nude images of their classmates & share online, finds survey

2024-08-28
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions minors using generative AI technologies to create non-consensual nude images, a direct misuse of AI that harms individuals through sexual exploitation and rights violations. The harm is realized, not just potential: the images are being created and shared, as the survey findings and reported incidents confirm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities.

Kids want social media apps to do more to protect them from the spread of deepfake nudes

2024-09-01
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI being used to create deepfake nudes of children, a direct misuse of AI technology that harms minors and communities through privacy violations, potential sexual exploitation, and psychological trauma. Because the AI system's use has directly led to significant harm to persons and communities, the event meets the criteria for an AI Incident. The article also discusses children's preferences for better safety tools, but its primary focus is the realized harm caused by AI-generated deepfake nudes.

1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds

2024-08-28
404 Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (tools that generate nude images) being used by minors to create non-consensual explicit images of other minors, a direct violation of rights that harms individuals and communities. The harm is realized and ongoing, as evidenced by the survey results and related incidents such as police investigations and arrests. Therefore, this qualifies as an AI Incident: the AI system's use has directly led to rights violations and harm to communities.

Kids want social media apps to do more to protect them from the spread of deepfake nudes

2024-09-01
DNyuz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI being used to create deepfake nudes of children, which constitutes a direct violation of rights and causes harm to the victims. The harms are realized and ongoing, including sextortion and revictimization. The AI system's role in generating abusive content is pivotal to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the development and use of AI systems have directly led to harm to a group of people (children).