Generative AI Drives Surge in Sophisticated Phishing Attacks Worldwide


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybercriminals are increasingly using generative AI to create highly targeted phishing attacks, resulting in hundreds of millions of attempts globally in 2024. These AI-powered scams employ personalized lures, deepfakes, and fake AI services, leading to widespread fraud, data breaches, and financial harm, especially in sectors like technology and finance. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that generative AI is being used by attackers to create personalized lures, deepfake content, and fake AI services to conduct phishing attacks that have already resulted in over 159 million hits in 2024. These attacks have caused direct harm by compromising users and organizations, including critical sectors like technology, finance, and services. The use of AI in this context is central to the harm, fulfilling the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to communities through cybercrime. [AI generated]
AI principles
Accountability, Safety, Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Digital security, Financial and insurance services

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


India ranks second globally and leads APJ in phishing attacks - Zscaler study

2025-07-22
ETCISO.in

India ranks second globally and leads APJ in phishing attacks: Zscaler study

2025-07-22
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The article explicitly states that cybercriminals are using generative AI to launch targeted phishing attacks, which are a form of harm to individuals and organizations (harm to communities and property). The AI system's use in creating personalized lures and evading defenses directly contributes to these harms. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Researchers Found Nearly 600 Incidents of AI Fraud

2025-07-23
Security Magazine
Why's our monitor labelling this an incident or hazard?
The report explicitly identifies nearly 600 incidents of generative AI fraud involving impersonation of AI platforms, which constitutes direct harm to individuals through phishing and fraud. The AI system's use in generating convincing fraudulent content and impersonations is central to the harm caused. Therefore, this qualifies as an AI Incident because the development and use of generative AI systems have directly led to realized harm through fraud and phishing attacks.

India ranks second globally and leads APJ in phishing attacks - Zscaler study - APN News

2025-07-22
apnnews.com
Why's our monitor labelling this an incident or hazard?
The report explicitly states that cybercriminals are using generative AI to launch targeted phishing attacks that have already resulted in over 80 million phishing attempts in India alone, with millions more globally. These attacks involve AI-generated content designed to evade detection and manipulate victims, leading to realized harms such as fraud, data breaches, and financial losses. The AI system's use in generating phishing content and enabling these attacks directly leads to harm, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents ongoing, large-scale attacks involving AI, thus qualifying as an AI Incident rather than a hazard or complementary information.

Phishing simulations: What works and what doesn't - IT Security News

2025-07-23
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
While the article implies that AI may be used to create more convincing phishing emails, it does not report a specific AI Incident or AI Hazard. There is no direct or indirect harm described from an AI system's use or malfunction, nor a clear plausible future harm event. The content is general information about phishing and AI's role in it, fitting the category of Complementary Information as it enhances understanding of AI's impact on cybersecurity threats.

Phishing in the AI Era: 6 Tips to Build Resiliency

2025-07-23
Cofense
Why's our monitor labelling this an incident or hazard?
The article highlights the plausible future harm posed by AI-powered phishing campaigns, which could lead to significant security breaches and harm to organizations. Since no specific harm or incident is described as having occurred, and the focus is on preparedness and mitigation strategies, this qualifies as an AI Hazard. It identifies a credible risk stemming from the use of generative AI by attackers to enhance phishing attacks, thus fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

New Trends in Phishing Attacks Emerge as AI Reshapes the Tools Used by Cybercriminals

2025-08-14
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the development and execution of phishing attacks that have already caused harm by deceiving victims and facilitating fraud. The AI systems are directly involved in generating convincing messages, deepfake content, and evading detection, which are all contributing factors to realized harm to individuals and communities. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm caused by cybercrime.

Do Not Post These Photos On Your Facebook Or Instagram Account

2025-08-13
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used by malicious actors to conduct sophisticated phishing and social engineering attacks. Although no actual incident of harm is described, the use of AI to harvest data and craft convincing attacks poses a credible risk of harm to individuals' privacy, security, and potentially to organizations' confidential information. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to AI Incidents involving harm to persons and communities through fraud, data theft, and privacy violations.

AI-powered phishing attacks are on the rise and getting smarter - here's how to stay safe

2025-08-14
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI, large language models, AI-driven bots, deepfake generation) in the malicious use context to conduct phishing attacks. These attacks have directly led to harm to individuals by stealing sensitive information and causing financial and emotional damage, fitting the definition of an AI Incident. The article describes actual harm occurring (phishing attacks with AI involvement), not just potential harm or general AI developments, so it is classified as an AI Incident.

Kaspersky highlights biometric and signature risks with attempts increasing by 21.2% in the UAE

2025-08-14
Zawya.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models, deepfake generation, AI-driven bots) in the development and use of phishing attacks that have directly led to harm by stealing sensitive biometric and signature data, enabling unauthorized access and fraud. The harms include violations of privacy and security, financial and reputational damage to individuals and businesses, which fall under harm to persons and communities. The article describes realized harm from these AI-powered phishing campaigns, not just potential risks, thus qualifying as an AI Incident.

AI Is Making Phishing Scams Harder to Spot, Experts Warn

2025-08-14
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are being used to generate phishing scams that bypass traditional detection methods, including AI-generated deepfakes for audio and video impersonation. These AI-enabled attacks have directly led to increased successful phishing attempts, causing harm to individuals by tricking them into compromising security or transferring money. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to people (harm to health or property through fraud and deception).

AI-Driven Phishing Scams Surge 21.5% in the Middle East, Kaspersky Warns

2025-08-14
DT News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (large language models, deepfake, voice cloning) in the development and use of phishing scams that have directly led to harm by deceiving victims and stealing sensitive information. The article describes realized harm (a 21.5% increase in phishing attempts and millions of blocked malicious clicks), indicating that the AI systems' involvement has directly caused harm to individuals and communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Five AI-driven tactics driving a shift in phishing attacks - Businessday NG

2025-08-15
Businessday NG
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems (large language models, deepfake and voice cloning technologies) are being weaponized by cybercriminals to conduct phishing attacks that have directly led to harm such as fraud, theft of biometric data, and financial and reputational damage. The involvement of AI is central to the new tactics described, and the harms are clearly articulated and ongoing, meeting the criteria for an AI Incident. The use of AI-generated content to deceive victims and bypass security measures directly contributes to the harm, fulfilling the definition of an AI Incident rather than a hazard or complementary information.

Kaspersky highlights biometric, signature risks with attempts up by 21.2% in UAE

2025-08-15
TahawulTech.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (large language models, AI-generated deepfakes, voice cloning, AI bots) in phishing attacks that have already caused harm by stealing sensitive biometric and signature data, enabling unauthorized access and fraud. The harms are direct and realized, including theft of immutable biometric data and signatures, which pose long-term risks. The AI systems are central to the sophistication and success of these attacks, making this an AI Incident rather than a hazard or complementary information.

AI-powered tactics are transforming phishing attacks

2025-08-14
TahawulTech.com
Why's our monitor labelling this an incident or hazard?
The article clearly identifies AI systems (large language models, AI-generated deepfakes, voice cloning, AI bots) as tools used by attackers to perpetrate phishing attacks that have already resulted in harm, including theft of biometric data and fraudulent transactions. This meets the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights, financial and reputational damage). The detailed description of ongoing attacks and their consequences confirms realized harm rather than just potential risk. Hence, the classification as AI Incident is appropriate.