
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated videos depicting violence and hate against LGBTQ+, Jewish, Muslim, and other minority groups have proliferated on social media, garnering significant engagement and applause. The spread of such content is causing real harm to vulnerable communities and raising concerns about the lack of effective safety regulations.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for video creation) that have directly led to harm: the spread of hateful and violent content targeting vulnerable minority groups, which constitutes harm to communities and a violation of rights. The harm is realized and ongoing, as evidenced by the content's widespread dissemination and the concerns raised by advocacy groups and experts, so the event qualifies as an AI Incident. Although the article also discusses regulatory and governance responses, its primary focus is the harm caused by the AI-generated content itself rather than those responses, so it is not merely Complementary Information.[AI generated]