
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A man in Yunnan used AI tools to create and share fake videos depicting traffic accidents with casualties, falsely labeled as having occurred in Dali. The AI-generated misinformation caused public fear and misled residents, prompting authorities to issue an administrative penalty and to warn against fabricating emergencies online.[AI generated]
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate false videos that misled the public and caused social disruption, which constitutes harm to communities. Because the event describes realized harm resulting from AI-generated misinformation, it meets the criteria for an AI Incident. The role of AI in creating the false content is central to the harm caused, and authorities have addressed the incident through legal sanction.[AI generated]