
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Ten male minors in Valladolid, Spain, are on trial for using AI to generate and distribute pornographic images created by placing classmates' faces onto nude bodies. The images were shared without the victims' consent, leading to charges of child pornography and moral harm, and causing legal and psychological consequences for those affected.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate harmful content (pornographic images) that directly harmed individuals, inflicting moral and psychological harm on minors. The AI system was central to the creation and dissemination of this content, which led to legal action and sanctions. This meets the criteria for an AI incident because the use of the AI system directly resulted in violations of rights and harm to individuals.[AI generated]