
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Cybercriminals are increasingly using generative AI to craft highly targeted phishing attacks, with more than 159 million attempts recorded globally in 2024. These AI-powered scams employ personalized lures, deepfakes, and fake AI services, leading to widespread fraud, data breaches, and financial harm, particularly in sectors such as technology and finance.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly states that attackers are using generative AI to create personalized lures, deepfake content, and fake AI services for phishing attacks that had already resulted in more than 159 million attempts in 2024. These attacks have caused direct harm by compromising users and organizations, including in critical sectors such as technology, finance, and services. The use of AI is central to the harm, fulfilling the criteria for an AI Incident: the AI system's use has directly led to violations of rights and harm to communities through cybercrime.[AI generated]