
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI systems have been used to create and manipulate sexualised, fetishised images of women with disabilities and genetic conditions, including Down syndrome, vitiligo, and albinism. British charities and disability advocates condemned the trend, citing exploitation, misinformation, and harm to vulnerable communities. The deceptive images are often not labelled as AI-generated.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems generating manipulated images that sexualise and fetishise women with disabilities, directly causing harm by spreading misinformation and offensive content. The use of AI to create and disseminate deceptive, exploitative imagery targeting vulnerable groups fits the definition of an AI incident: it violates human rights and harms affected communities. The harm is realised and ongoing, not merely potential, and the AI systems play a pivotal role in producing and spreading this content.[AI generated]