AI-Generated Books Spread Misinformation on Amazon After Charlie Kirk Assassination and on Mushroom Safety

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated books rapidly appeared on Amazon following Charlie Kirk's assassination, spreading false details and fueling conspiracies. Separately, AI-authored mushroom-picking guides contained dangerous misinformation about poisonous mushrooms. Both incidents highlight how AI-generated content on Amazon can mislead the public and pose risks to health and social stability.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (generative AI) used to create misleading books that were published and sold, leading to misinformation and conspiracy spread. This misinformation constitutes harm to communities, fulfilling the criteria for an AI Incident. The AI's role is pivotal as the rapid generation and publication of these books would not be possible without AI. The harm is realized, not just potential, as conspiracies have already spread. Therefore, this is classified as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Safety; Respect of human rights; Human wellbeing; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers; General public

Harm types
Public interest; Physical (injury); Physical (death)

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Amazon removes likely AI-generated books about Charlie Kirk that sparked conspiracies

2025-09-12
Mashable SEA
AI e-books flood Amazon, ruining the customer experience

2025-09-14
Good e-Reader
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating e-books that contain misinformation about poisonous mushrooms, which has already led to removal of harmful content after media coverage. The misinformation could plausibly cause injury or death if readers follow unsafe advice, fulfilling harm to health. Additionally, the flood of AI-generated low-quality books harms the self-publishing industry and consumer experience, constituting harm to communities. The AI system's use in generating these books is central to the harm. Thus, this is an AI Incident as the AI system's use has directly and indirectly led to harm.
Apparent AI-generated books on Charlie Kirk's assassination flood Amazon

2025-09-12
Muvi TV
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating content (books) that directly led to harm in the form of misinformation and social disruption, which qualifies as harm to communities. The AI-generated books spread false information about a sensitive and tragic event, which can exacerbate social tensions and mislead the public. The presence of AI-generated content and its role in causing these harms meets the criteria for an AI Incident. The article also discusses Amazon's mitigation efforts, but the primary focus is on the harm caused by the AI-generated books themselves.
Apparent AI-generated books on Charlie Kirk's assassination flood Amazon

2025-09-11
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and publishing content that misrepresents facts about a real assassination, which is a clear harm to communities through misinformation and social disruption. The AI-generated books were actively sold and consumed before removal, indicating realized harm rather than just potential. The AI system's use in this context directly led to the harm described. Therefore, this qualifies as an AI Incident under the framework, as it involves the use of AI systems leading to harm to communities through misinformation and social disruption.