
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated fake videos and images have flooded Nepal's election campaigns, spreading misinformation and hate speech. This disinformation, amplified on social media, is misleading voters and undermining democratic processes, particularly in a context of low digital literacy and limited monitoring expertise.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated images and videos being used to spread false information and hate speech during the election, with authorities already handling cases related to this disinformation. The harm is realized: misinformation is misleading voters and undermining democratic processes, which constitutes harm to communities and a violation of democratic rights. This therefore qualifies as an AI Incident, given the direct role of AI systems in causing significant societal harm.[AI generated]