
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI systems, including xAI's Grok, have enabled the mass creation and dissemination of sexualized and nonconsensual deepfake images, leading to reputational, emotional, and psychological harm, especially among minors. Social media platforms have increased takedown efforts, but the rapid spread of deepfakes continues to pose significant societal and legal challenges globally.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly links the rise of AI-generated deepfake content to societal harm, including misinformation and damage to individuals and public discourse. Because the AI system's use in generating deepfakes has directly caused these harms, the criteria for an AI Incident are met. The platforms' increased takedown efforts are responses to an ongoing incident rather than the article's main focus, so the article is not primarily complementary information. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. Hence, the classification is AI Incident.[AI generated]