AI-Generated Disturbing and Misleading Images Flood Facebook

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images, from disturbing depictions of suffering children to surreal memes like 'Shrimp Jesus,' are circulating widely on Facebook. Often unlabeled and used by spammers and scammers to drive engagement, these images have caused distress and spread misinformation; Facebook has been criticized for failing to address or label the harmful content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating images that spam accounts use maliciously to manipulate social media engagement and spread misinformation. This has harmed communities by polluting information spaces with AI-generated spam and misleading content. Because the harm is realized rather than merely potential, and AI plays a pivotal role in generating the harmful content, the event meets the definition of an AI Incident.[AI generated]
AI principles
Transparency & explainability; Accountability; Safety; Robustness & digital security; Human wellbeing; Respect of human rights; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Psychological; Reputational; Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation

Articles about this incident or hazard

Facebook's Surreal 'Shrimp Jesus' Trend, Explained

2024-04-28
Forbes

Alarm raised over bizarre images circulating on Facebook

2024-04-29
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating images that are spread on social media platforms, some of them unlabeled and misleading to users. The use of AI-generated content by spammers and scammers to manipulate audience growth constitutes indirect harm to communities through misinformation and deception. Although no physical harm is reported, the misleading content and its potential to erode trust and distort social interactions constitute harm to communities. The event therefore qualifies as an AI Incident: realized harm caused by the use of AI systems.

Facebook pushing AI-generated images of starving, drowning, bruised and mutilated children into users' feeds

2024-05-01
Boing Boing
Why's our monitor labelling this an incident or hazard?
AI-generated images of mutilated and dying children are being actively pushed into users' feeds, harming communities by spreading distressing and potentially misleading content. The development and use of the AI systems that generate these images lead directly to that harm, and Facebook's failure to effectively address or label the content compounds it. The event therefore meets the criteria for an AI Incident: the use of an AI system has directly harmed communities.