
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
AI-powered deepfakes are increasingly used for identity theft, fraud, misinformation, and non-consensual pornography, causing financial, reputational, and psychological harm. These incidents undermine trust in media, disrupt markets, and challenge legal and security systems, prompting urgent development of AI-based detection and regulatory responses.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deep learning algorithms generating deepfakes) being used to create false videos and audio that have caused real harm, including political misinformation, extortion, and reputational damage. These harms correspond to violations of rights and harm to communities. The article also cites specific cases and societal impacts, confirming that harm has actually occurred rather than being merely potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]