AI-Facilitated Sexual Violence Against Children in Brazil

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A UNICEF-led report reveals that 19% of Brazilian children and adolescents (about 3 million) experienced technology-facilitated sexual violence in one year. AI systems were used to manipulate images, generate sexualized content, and enable abuse via social media and messaging platforms, causing significant psychological harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of generative AI to create sexual images or videos of children and adolescents without consent, which is a direct violation of human rights and causes significant harm to the victims. The harm is realized and documented, including mental health impacts and increased risk of self-harm and suicidal thoughts. The AI system's involvement in producing harmful content that leads to these outcomes qualifies this event as an AI Incident under the OECD framework.[AI generated]
AI principles
Respect of human rights, Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

3 million children and adolescents suffered online sexual violence in Brazil

2026-03-04
Poder360
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexual images or videos of children and adolescents without consent, which is a direct violation of human rights and causes significant harm to the victims. The harm is realized and documented, including mental health impacts and increased risk of self-harm and suicidal thoughts. The AI system's involvement in producing harmful content that leads to these outcomes qualifies this event as an AI Incident under the OECD framework.

One in five adolescents was a victim of sexual violence on sites such as Roblox, Free Fire, and Instagram

2026-03-04
O TEMPO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that 3% of children and adolescents reported that AI was used to create sexual images or videos of their likeness, indicating direct involvement of AI systems in producing harmful content. This constitutes a clear AI Incident as the AI system's use has directly led to harm (sexual exploitation and abuse) of individuals, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The report also discusses the broader context of technology-facilitated sexual abuse, but the AI-generated content aspect is a direct harm caused by AI.

One in five children and adolescents suffered technology-facilitated sexual violence in one year, study reveals

2026-03-04
Folha de Boa Vista
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative tools to create fake sexual images of minors, which constitutes a direct violation of rights and causes significant psychological harm. The AI system's role is pivotal in facilitating this harm through the generation of non-consensual explicit content. The article documents realized harm (sexual exploitation and abuse facilitated by technology including AI), meeting the criteria for an AI Incident. The involvement of AI is explicit and linked to direct harm to individuals' health and rights, fulfilling the definitions for an AI Incident rather than a hazard or complementary information.

Survey finds 19% of adolescents in Brazil have suffered online sexual abuse - Bahia Notícias

2026-03-04
Marcelo Bittencourt
Why's our monitor labelling this an incident or hazard?
The involvement of generative AI as a tool used by abusers to facilitate sexual exploitation constitutes the use of an AI system that has directly led to harm to a group of people (adolescents). Since the harm has already occurred and AI played a role in enabling it, this qualifies as an AI Incident under the framework's definition of harm to people facilitated by AI.

One in five children in Brazil suffers technology-facilitated sexual violence, says Unicef

2026-03-04
Valor Econômico
Why's our monitor labelling this an incident or hazard?
The report documents technology-facilitated sexual violence against children on platforms that use AI systems for content management and user interaction. The harm is realized and significant, including exposure to unsolicited sexual content and exploitation. The AI systems' role in content dissemination, and their potential failure to adequately protect users, makes this an AI Incident under the framework, as it involves violations of rights and harm to communities directly linked to AI system use.

1 in 5 Brazilian adolescents report having suffered online sexual exploitation or abuse, Unicef finds - Jornal de Brasília

2026-03-04
Jornal de Brasília
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI and digital technologies (including AI tools) to facilitate sexual exploitation and abuse of adolescents, which is a direct harm to the health and well-being of individuals (harm category a). The involvement of AI systems is explicit in the report's mention of AI tools as part of the technology facilitating abuse. The harm is realized and significant, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents actual cases and impacts, thus it is not a hazard or complementary information. It is not unrelated because AI systems are explicitly involved in the facilitation of harm.

1 in 5 adolescents in Brazil suffer sexual violence on the internet

2026-03-04
Vvale
Why's our monitor labelling this an incident or hazard?
The event involves AI systems embedded in social media and messaging platforms that facilitate sexual violence against adolescents. The AI systems' role in content dissemination, recommendation, and user interaction indirectly leads to harm (sexual violence, exploitation, exposure to harmful content). The harm is realized and significant, affecting millions of adolescents, thus meeting the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports actual harm facilitated by AI systems.

Brazil counts 3 million child and adolescent victims of online sexual violence in one year - Jornal O Sul

2026-03-05
Jornal O Sul
Why's our monitor labelling this an incident or hazard?
The involvement of AI is explicit in the creation of sexualized images or videos using the victims' appearances, which constitutes a direct use of AI systems to generate harmful content. This use has directly led to harm to the victims, including psychological harm such as anxiety, self-harm, and suicidal thoughts, fitting the definition of an AI Incident. The AI system's role is pivotal in facilitating this form of abuse and exploitation online.

1 in 5 Brazilian adolescents suffers sexual violence on the internet, Unicef finds - Metro 1

2026-03-04
Metro 1
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to manipulate images, which is explicitly mentioned as part of the sexual violence experienced by adolescents. This manipulation contributes directly to harm (sexual violence, exploitation, threats, and coercion), violating human rights and causing significant harm to individuals and communities. The AI system's use in this context is a contributing factor to the harm, meeting the criteria for an AI Incident.

Digitally facilitated sexual abuse affects 19% of children

2026-03-04
FA NOTÍCIAS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that 3% of children reported their appearance was used by AI to generate sexual content, indicating direct AI involvement in harmful activities. This constitutes a violation of human rights and causes harm to individuals and communities, fitting the definition of an AI Incident. The harm is realized, not just potential, as the abuse and exploitation have occurred. The article also discusses the role of digital platforms and AI tools in facilitating abuse, confirming the AI system's role in the harm. Hence, the event is classified as an AI Incident.