
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated books appeared on Amazon within hours of Charlie Kirk's assassination, spreading false details and fueling conspiracy theories. Separately, AI-authored mushroom-foraging guides sold on the platform contained dangerously inaccurate guidance on identifying poisonous mushrooms. Both incidents highlight how AI-generated content on Amazon can mislead the public and pose risks to health and social stability.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI) used to create misleading books that were published and sold, leading to the spread of misinformation and conspiracy theories. This misinformation constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI's role is pivotal: the rapid generation and publication of these books would not have been possible without it. The harm is realized rather than merely potential, as the conspiracies have already spread. Therefore, this is classified as an AI Incident.[AI generated]