AI Generates Fetishised Images of Disabled Women, Sparking Outrage

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems have been used to create and manipulate sexualised, fetishised images of women with disabilities and genetic conditions, including Down syndrome, vitiligo, and albinism. British charities and disability advocates condemned the trend, citing exploitation, misinformation, and harm to vulnerable communities. The deceptive images are often not labelled as AI-generated.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating manipulated images that sexualise and fetishise women with disabilities, which directly leads to harm by spreading misinformation and offensive content. The involvement of AI in creating deceptive and harmful images that exploit vulnerable groups fits the definition of an AI Incident, as it causes violations of human rights and harm to communities. The harm is realised and ongoing, not merely potential, and the AI's role is pivotal in producing and disseminating this content.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Now AI Being Used To Make 'Fetishised' Images Of Women With Disabilities

2026-03-27
NDTV
AI used to make 'fetishized' images of disabled women

2026-03-27
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
AI systems are explicitly involved in generating and manipulating images that sexualise and fetishise disabled women, which has led to realised harm, including offence, exploitation, misinformation, and potential psychological harm to vulnerable groups. The use of AI in this context directly contributes to violations of rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article details ongoing harm rather than potential or mitigated risks.
Global outcry as AI used to create 'fetishised' images of disabled women

2026-03-27
IOL
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that sexualise and fetishise disabled women, causing harm through exploitation and misinformation. The use of AI systems to create deceptive and harmful content directly leads to violations of rights and harm to communities, fulfilling the criteria for an AI Incident. The harm is realised and ongoing, not merely potential, and the AI system's role is pivotal in generating the harmful content.
Harmful fantasies: How AI is fetishising women with disabilities

2026-03-27
Malay Mail
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate manipulated images that sexualise and fetishise women with disabilities, which is a direct use of AI technology. The harms include misinformation, exploitation, and the violation of rights, which fall under harm to communities and violations of human rights. Since the harm is occurring and the AI system's use is central to causing it, this qualifies as an AI Incident rather than a hazard or complementary information.
AI used to make 'fetishised' images of disabled women

2026-03-27
KTBS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create manipulated images that sexualise and fetishise disabled women, which is a direct use of AI technology. The harms include misinformation, exploitation, and offensive portrayals that affect the dignity and rights of disabled people, constituting harm to communities and a violation of rights. Since these harms are occurring and linked directly to the AI-generated content, this qualifies as an AI Incident under the framework.
AI used to make 'fetishised' images of disabled women

2026-03-27
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The AI system is used to create sexualised images of disabled women, which constitutes harm to communities through the spread of fetishised and deceptive content. The images are AI-generated and not properly labelled, which misleads viewers and contributes to the harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and a violation of rights related to dignity and respect.