
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Two teenage boys in Lancaster, Pennsylvania, used AI to create fake nude images of female classmates, causing emotional harm and violating their rights. The teens were sentenced to probation, community service, and restitution. The incident highlights the dangers of AI-enabled deepfake technology and has prompted legislative attention.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI to create manipulated images (deepfakes) of minors, which directly harmed the victims' mental health and violated their rights. The AI system's use is central to the harm, fulfilling the criteria for an AI incident. The harm is realised, not merely potential, and includes violations of rights and psychological injury, both of which are covered under the AI incident definition.[AI generated]