
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Networks of spammers have used AI to generate and spread fake Holocaust images and stories on Facebook, causing emotional distress to survivors and their families, distorting historical facts, and disrespecting victims. These AI-generated posts, often monetised for profit, have drawn criticism from Holocaust preservation groups and raised concerns about misinformation.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems generating false Holocaust images and narratives that are disseminated widely on social media, harming communities by distorting historical facts and disrespecting victims. This is a direct harm caused by the use of AI-generated content, and the monetisation of these posts incentivises the further spread of such harmful material. It therefore qualifies as an AI Incident: harm to communities has been realised, and rights have been violated through misinformation and disrespectful content.[AI generated]