
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Researchers from Barracuda, Columbia University, and the University of Chicago found that over half of all spam and malicious emails are now generated by AI, particularly large language models. This shift, which has accelerated since ChatGPT's launch, has made phishing and scam emails more convincing and harder to detect, increasing cybercrime risks.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly states that over half of spam emails are now generated by large language models, which are AI systems. These AI-generated emails are more sophisticated and harder to detect, increasing both the risk and the occurrence of harm through spam and phishing attacks. By enabling scams and deceptive practices, this directly harms individuals and communities. It therefore meets the criteria for an AI incident: realized harm caused by AI-generated malicious content.[AI generated]