
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A challenge on the 4chan forum encouraged users to bypass AI safeguards and create explicit deepfake images of Taylor Swift using generative AI tools. These non-consensual images were widely disseminated on social media, causing reputational and emotional harm and highlighting the misuse of AI to generate harmful content.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of generative AI tools to create manipulated images (deepfakes) that impersonate a real person without consent. This harmed the individual's reputation and privacy, which constitutes a violation of rights, and the harm resulted directly from the AI system's use through the generation and distribution of false and damaging content. Because the harm to the individual and community is realised and directly linked to the AI system's use, the event qualifies as an AI Incident under the framework.[AI generated]