
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Hungarian authorities and support organizations warn of the growing use of AI-powered deepfake and nudifying apps that generate fake nude images, including of children. These AI-generated images are used for sexual abuse and blackmail and cause lasting psychological harm, prompting calls for vigilance and international concern over the technology's misuse. [AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI and deepfake technology) used to create realistic non-consensual explicit images, directly harming individuals' rights and dignity, particularly those of women and children. The harms include violations of human rights and potential criminal exploitation, fulfilling the criteria for an AI incident. The article reports that millions of such images have been generated and distributed, with documented cases of associated criminal behavior, confirming realized harm rather than merely potential risk. [AI generated]