
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-generated deepfake videos are being deployed in US political campaigns, notably by the National Republican Senatorial Committee, to misrepresent candidates and spread misinformation. These realistic ads are eroding voter trust and undermining democratic processes, with limited regulation and safeguards in place.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake videos that misrepresent political candidates, deceiving voters through misinformation. This harms communities by undermining democratic integrity and voter trust, meeting the criterion of harm to communities. The harm is realized and ongoing, not merely potential. This therefore qualifies as an AI Incident, given the direct role of AI in causing significant societal harm through misinformation in political campaigns.[AI generated]