
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated images, ranging from disturbing depictions of suffering children to surreal memes such as 'Shrimp Jesus,' are circulating widely on Facebook. These images are often unlabeled and are exploited by spammers and scammers to drive engagement; they have caused distress and spread misinformation, and Facebook has been criticized for failing to remove or label the harmful content.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems generating images that spam accounts use maliciously to manipulate social media engagement and spread misinformation. This has harmed communities by polluting information spaces with AI-generated spam and misleading content. Because the harm to communities has been realized, not merely potential, and AI played a pivotal role in generating the harmful content, the event meets the definition of an AI Incident. The article documents ongoing harm rather than a potential risk or complementary information, so it qualifies as an AI Incident rather than an AI Hazard.[AI generated]