AI-Generated Deepfakes and Online Abuse Drive Women from Public Life

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A UN Women report reveals that AI-generated deepfakes and technologically advanced online abuse are increasingly targeting women journalists, activists, and human rights defenders globally. These AI-enabled attacks have led to psychological harm, self-censorship, and withdrawal from public life, undermining women's rights and participation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions generative AI apps used to create non-consensual intimate images and deepfakes, which have caused realized harm including mental health issues (depression, anxiety, PTSD) and social harm (self-censorship, job loss). These harms fall under violations of human rights and harm to communities. The AI systems' malicious use is a direct contributing factor to these harms. The discussion of legal measures further supports the recognition of these harms as incidents rather than potential hazards or complementary information.[AI generated]
AI principles
Respect of human rights; Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
Women; Civil society

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Online violence and deepfakes push women out of work, survey finds

2026-04-30
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI apps used to create non-consensual intimate images and deepfakes, which have caused realized harm including mental health issues (depression, anxiety, PTSD) and social harm (self-censorship, job loss). These harms fall under violations of human rights and harm to communities. The AI systems' malicious use is a direct contributing factor to these harms. The discussion of legal measures further supports the recognition of these harms as incidents rather than potential hazards or complementary information.
'Virtual rape': AI is silencing women in public life, UN report

2026-04-30
Euronews English
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI-generated deepfakes and AI-powered abuse as tools used to harass and silence women, leading to realized harms such as psychological trauma, self-censorship, and reputational damage. These harms fall under violations of human rights and harm to communities. The AI system's role in generating manipulated content is pivotal to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
UN Women report finds online violence and deepfakes drive women from public life

2026-04-30
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-assisted 'virtual rape', deepfakes, and AI-enabled online violence targeting women, causing significant psychological harm and social exclusion. The harms are realized and documented through a survey of affected women, showing direct impacts on mental health and public participation. The AI systems' misuse is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities.
Online Violence Reports Against Women Journalists Double

2026-04-30
Mirage News
Why's our monitor labelling this an incident or hazard?
While AI-generated deepfakes are mentioned as a form of online violence contributing to harm against women journalists, the article does not describe a specific AI system's development, use, or malfunction directly causing an incident. Instead, it reports on aggregated data and trends regarding online abuse in the AI era. The focus is on raising awareness and the need for systemic responses rather than detailing a discrete AI Incident or an imminent AI Hazard. Therefore, this is best classified as Complementary Information providing context and understanding of AI's role in online violence against women.
AI Heightens Abuse Against Women Journalists

2026-04-30
Mirage News
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI-generated deepfake images as a form of abuse experienced by women journalists, which is a direct use of AI systems causing harm. The harms include mental health issues, self-censorship, and violations of rights, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in making abuse easier and more damaging, leading to realized harm rather than just potential risk.
Abuse of women journalists made 'easier and more damaging' by AI

2026-04-30
UN News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and technologically sophisticated AI-enabled abuse that has caused real harm to women journalists, including diagnosed mental health conditions and forced self-censorship. The harms are realized and directly linked to AI system use in online violence. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to violations of rights and harm to health and communities.
Deepfakes, online violence driving women away from public life

2026-05-01
bizzbuzz.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-assisted 'virtual rape' and deepfakes being used to harass and silence women, causing real psychological harm and violations of rights. The AI systems' use in generating manipulated images and videos and facilitating online violence has directly led to harm (mental health issues, self-censorship, reputational damage). This fits the definition of an AI Incident as the AI system's use has directly led to harm to persons and violations of rights.