
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Kaspersky warns that criminals increasingly use AI-generated deepfake videos and voice clones to bypass security controls and commit fraud, identity theft, and corporate espionage. The underground market charges $300–$20,000 per minute for high-quality fakes. A Kaspersky study found that 75% of Peruvians are unfamiliar with deepfakes, heightening their vulnerability to these attacks.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology based on generative neural networks) that have been used to create convincing fake content, leading to harms such as identity theft, fraud, disinformation, and reputational damage. These harms have already materialized or are actively occurring, fulfilling the criteria for an AI incident. The article's discussion of risks and recommendations supports the presence of actual harm rather than merely potential harm. The event is therefore best classified as an AI incident rather than a hazard or complementary information.[AI generated]