
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Three men in Japan were arrested for using AI deepfake technology to create and distribute fake pornographic videos superimposing the faces of well-known female celebrities. The videos, which were widely circulated online for profit, caused reputational harm to at least six actresses, prompting police action and ongoing efforts to curb such AI-generated abuse.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event directly involves an AI system: deepfake technology was used to produce synthetic videos that damage individuals' reputations, violating their rights under applicable law. The harm has already materialised, as evidenced by the police arrests and the continued illegal distribution of the content. This event therefore qualifies as an AI Incident, given the direct link between the AI-generated content and the harm to individuals' rights and reputations.[AI generated]