Cybercriminals Weaponize AI for Global Phishing and Deepfake Scams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybercriminals are increasingly using AI to create sophisticated phishing emails and deepfake videos, leading to significant financial and reputational harm worldwide. Interpol's cybercrime unit in Singapore is actively combating these AI-driven threats, which target individuals, corporations, and governments for billions of dollars in losses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being weaponized by cybercriminals to create sophisticated phishing emails and deepfake videos that have led to scams and financial theft, which are harms to individuals and communities. The involvement of AI in these cybercrimes is direct and central to the harm caused. The article also discusses Interpol's response to these incidents, but the primary focus is on the ongoing AI-enabled cybercrime causing actual harm. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
Financial and insurance services

Affected stakeholders
Consumers
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to create sophisticated phishing emails and deepfake videos that have led to scams and financial theft, which are harms to individuals and communities. The involvement of AI in these cybercrimes is direct and central to the harm caused. The article also discusses Interpol's response to these incidents, but the primary focus is on the ongoing AI-enabled cybercrime causing actual harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to perpetrate phishing, deepfake scams, and other cyberattacks that have caused significant financial and reputational harm to victims globally. The involvement of AI in generating convincing fake content and automating attacks is central to the harm described. Interpol's operations to dismantle malicious infrastructures and arrest criminals further confirm that harm has occurred. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm to individuals and communities.

Phishing emails, fake audio recordings, doctored videos... How is AI benefiting cybercriminals?

2026-02-15
CNEWS
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., generative AI for creating fake emails, audio, and video) by cybercriminals to perpetrate phishing and fraud, which directly harms individuals financially and socially. This fits the definition of an AI Incident because the AI system's use has directly led to harm (a). The article does not merely warn about potential future harm but describes ongoing criminal activity and its consequences. Therefore, it is classified as an AI Incident.

Interpol confronts AI, a formidable weapon for cybercriminals

2026-02-15
DH.be
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used maliciously by cybercriminals, which have directly led to harms such as financial fraud, identity theft, and victimization of individuals. These harms fall under violations of rights and harm to communities. The article describes realized harms caused by AI-enabled cybercrime tactics, not just potential risks. Therefore, this qualifies as an AI Incident. The article also discusses responses and countermeasures but the primary focus is on the ongoing harms caused by AI misuse in cybercrime.

Interpol confronts AI, a formidable weapon for cybercriminals

2026-02-15
Courrier international
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by cybercriminals to perpetrate phishing scams, identity theft, and other cybercrimes that have already caused harm to victims. The AI systems are explicitly mentioned as tools for generating realistic fake content and more convincing scams, which directly lead to harm to people and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms such as financial loss and identity fraud. The article does not merely discuss potential future harm or general AI developments but focuses on ongoing criminal activities using AI.

Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
eNCAnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being weaponized by cybercriminals to produce phishing emails and fake videos, which are forms of AI-generated content used to perpetrate scams and impersonations. These activities cause direct harm to victims, including financial losses and potential violations of rights. The involvement of AI in the development and use of these malicious tools directly leads to harm, fulfilling the criteria for an AI Incident.

Interpol backroom warriors fight cyber criminals 'weaponizing' AI

2026-02-15
Arab News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being weaponized by cybercriminals to create convincing fake videos and messages that endorse scam investments and phishing attacks. These AI-enabled attacks have directly caused financial harm to victims worldwide, fulfilling the criteria for an AI Incident. The involvement of AI in the development and use of these malicious tools is clear, and the harms (financial loss, deception) are realized. The article focuses on the ongoing harm and Interpol's response, not just potential future risks or general AI developments, so it is not a hazard or complementary information but an AI Incident.

Interpol backroom warriors fight cybercriminals 'weaponizing' AI

2026-02-15
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by cybercriminals to perpetrate phishing scams, deepfake impersonations, and other cyberattacks that have caused realized harm to victims, including financial losses and data theft. This constitutes an AI Incident because the AI system's use has directly led to harm to individuals and communities. The article also discusses ongoing operations to counter these harms, but the primary focus is on the realized harms caused by AI-enabled cybercrime.

Interpol backroom warriors fight cyber criminals 'weaponizing' AI

2026-02-15
Japan Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to produce highly convincing phishing emails and deepfake videos, which have directly led to financial losses and victimization of individuals and organizations. The involvement of AI in the development and use of these malicious tools is clear, and the harms described (financial theft, scams, impersonation) fall under harm to communities and individuals. Interpol's response and operations are described as countermeasures but do not negate the fact that AI-enabled cybercrime is actively causing harm. Hence, this is an AI Incident.

Interpol backroom warriors fight AI cyber criminals

2026-02-15
Oman Observer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to perpetrate scams and cyberattacks that have caused actual harm, including financial losses and victimization of tens of thousands of people. The involvement of AI in generating phishing emails and deepfake videos is clear, and the resulting harm is direct and significant. Interpol's response and operations are described as mitigating these harms but do not negate the fact that AI-enabled cybercrime is ongoing and harmful. Hence, this qualifies as an AI Incident due to realized harm caused by AI misuse in cybercrime.

Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
KTBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to create convincing phishing emails and deepfake videos, which are used to perpetrate scams and cyberattacks. These activities have resulted in significant financial harm and victimization, fulfilling the criteria for harm to communities and individuals. The involvement of AI in the development and use stages of these malicious tools is clear, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Interpol Backroom Warriors Fight Cyber Criminals 'Weaponizing' AI

2026-02-15
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by cybercriminals to create convincing phishing emails and deepfake videos, which have caused financial and reputational harm to victims globally. This constitutes direct harm to individuals and communities. The involvement of AI in these criminal activities and the resulting damages meet the criteria for an AI Incident. The article also discusses Interpol's response but the primary focus is on the realized harms caused by AI-enabled cybercrime.

Interpol confronts AI, a formidable weapon for cybercriminals

2026-02-15
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used by cybercriminals to perpetrate phishing, identity theft, and fraud, which directly harm individuals and communities. The article reports ongoing harms caused by AI-generated content and tools used maliciously, fitting the definition of an AI Incident. The involvement of AI in the development and use of these malicious tools is clear, and the harms (financial loss, identity theft, deception) are occurring. Therefore, this is an AI Incident.

Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being weaponized by cybercriminals to create sophisticated phishing emails and deepfake videos that facilitate scams and impersonations, causing financial harm and victimization. This constitutes direct harm to people and communities. The involvement of AI in the development and use of these malicious tools is clear, and the harms are ongoing and realized. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Interpol confronts AI, a formidable weapon for cybercriminals

2026-02-15
timeline
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used maliciously by cybercriminals to perpetrate phishing, identity theft, and scams, which directly harm individuals and communities. The article describes realized harms including financial losses, victimization, and criminal activity facilitated by AI-generated content and tactics. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to significant harms as defined in the framework (harm to persons and communities).

Interpol backroom warriors fight cyber criminals 'weaponising' AI

2026-02-15
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves AI systems being used maliciously by cybercriminals to perpetrate scams and cyberattacks, which have resulted in realized harm such as financial losses and victimization of individuals and organizations. The article explicitly mentions AI-generated phishing emails and deepfake videos used to deceive victims, which constitutes direct involvement of AI in causing harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities and property.

AI & Cybercrime: Interpol's Singapore Hub Fights Global Threats - News Directory 3

2026-02-15
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI-powered cybercrime tools and their use by criminals. The harms described (phishing scams, disinformation, financial theft) are real and significant, and AI is a pivotal factor in enabling these harms. However, the article does not describe a specific new AI Incident event causing harm, nor does it focus on a plausible future harm scenario alone. Instead, it details Interpol's strategic and operational responses, partnerships, and ongoing efforts to combat AI-enabled cybercrime globally. This aligns with the definition of Complementary Information, as it provides supporting context and updates on societal and governance responses to AI-related harms rather than reporting a new incident or hazard.

Interpol high-tech war rooms fighting cybercriminals - Taipei Times

2026-02-16
Taipei Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used by cybercriminals to generate convincing phishing emails, deepfake videos, and hacking tools that have directly caused harm to individuals and organizations through scams and cyberattacks. The article details ongoing criminal use of AI leading to realized harm (financial losses, victimization), which fits the definition of an AI Incident. The involvement of AI in the development and use of these malicious tools and the resulting harms to communities and property (financial assets) is clear. Therefore, this is an AI Incident rather than a hazard or complementary information.

Interpol confronts AI, a formidable weapon for cybercriminals

2026-02-16
timeline
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (e.g., generative AI for phishing emails, deepfake audio and video) by cybercriminals to commit crimes that harm people and communities. The harms are realized as these AI-generated materials are actively used in scams and identity theft, fitting the definition of an AI Incident due to violations of rights and harm to communities. The article focuses on the actual use and impact of AI in cybercrime, not just potential risks or responses, thus qualifying as an AI Incident.

From Singapore, Interpol monitors cybercrime and the "threat" of artificial intelligence

2026-02-15
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by criminals to create fraudulent emails, deepfake videos, and other cybercrime tools that have already caused significant financial and reputational harm. The involvement of AI in these criminal activities is direct and ongoing, fulfilling the criteria for an AI Incident. The article also describes the response by law enforcement but the primary focus is on the realized harms caused by AI-enabled cybercrime, not just potential or future risks or responses, so it is not Complementary Information or an AI Hazard.

Operations worth billions of dollars... How Interpol confronts the challenge of "AI" fraud | Al Khaleej

2026-02-15
Al Khaleej
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by criminals to generate fraudulent emails, deepfake videos, and other scams that have caused financial losses amounting to billions of dollars. The involvement of AI in these criminal activities directly leads to harm to individuals and organizations (financial harm and identity theft). The article also discusses the use of AI-based tools by Interpol to analyze data and combat these crimes, confirming the central role of AI systems in both the harm and the response. Hence, this is a clear case of an AI Incident as defined by the framework, involving direct harm caused by AI-enabled malicious use.

Interpol monitors cybercrime and the "threat" of artificial intelligence from Singapore

2026-02-15
Arab 48
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by criminals to generate fake videos and audio, conduct phishing and fraud, and cause financial harm to victims. These activities constitute realized harm to people and communities, fulfilling the criteria for an AI Incident. The involvement of AI in enabling these crimes is direct and central to the harm described. The article does not merely warn of potential future harm but reports ongoing criminal use of AI causing actual damage. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

From Singapore: How does Interpol monitor cybercrime and the "threat" of artificial intelligence?

2026-02-15
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by criminals to perpetrate cybercrime, including AI-generated fake videos and voice clones to commit fraud, which directly harms victims financially and through identity theft. The involvement of AI in these crimes is clear and ongoing, fulfilling the criteria for an AI Incident. The article also describes law enforcement's use of AI to combat these crimes, but the primary focus is on the realized harms caused by AI-enabled criminal activities. Therefore, the event is best classified as an AI Incident.

Interpol: AI turns any phone into a potential target - Dijlah TV

2026-02-15
Dijlah TV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by criminals and amateurs to commit cybercrimes including fraud and identity theft, which have caused real financial harm to victims worldwide. The AI involvement is in the use of AI-generated content (deepfakes) and AI tools to facilitate these crimes. This constitutes an AI Incident because the AI system's use has directly led to harm to people (financial harm) and communities (cybercrime impact). The article also discusses ongoing detection and prevention efforts but the primary focus is on the realized harm caused by AI-enabled cybercrime.

Study Finds AI Is Fueling An Alarming Surge In Sophisticated Phishing Scams

2026-03-03
HotHardware
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI enables attackers to generate more convincing phishing emails, texts, and cloned voices, which directly contributes to an increase in successful scams and financial harm to victims. This meets the definition of an AI Incident because the use of AI in the scam's execution has directly led to harm to people (financial injury) and harm to communities (widespread fraud).

Study Finds Phishing Scams Are on the Rise, Accelerated by AI

2026-03-02
CNET
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI tools) in the creation and execution of phishing scams that have caused real financial harm to people, as evidenced by the reported billions in losses and increased scam complaints. The AI's role is pivotal in enabling the sophistication and scale of these scams, which have materialized as actual harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons (financial injury) and communities (widespread scam impact).

How to Protect Yourself From the 10 Most Common AI Travel Scams

2026-03-03
Fodors Travel Guide
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate and deploy travel scams that have directly caused financial harm to many people. The AI systems' use in automating phishing and scam content is a clear example of AI involvement leading to harm (financial loss) to individuals, which fits the definition of an AI Incident under harm to communities or persons. Therefore, this is classified as an AI Incident.

AI SCAMS: What they are and how to spot them

2026-03-03
WJLA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems being used maliciously to perpetrate scams that have directly led to billions of dollars in losses, which constitutes harm to property and communities. The AI's role is pivotal in making the scams more convincing and harder to detect, thus directly contributing to the harm. Therefore, this qualifies as an AI Incident under the framework.

'AI-generated scams becoming sophisticated' - The Nation Newspaper

2026-03-06
The Nation (Nigeria)
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated scams that are currently causing real harm by defrauding individuals and exploiting emotional trust. The AI systems involved generate personalized, convincing scam messages and deepfake audio/video, which have led to significant financial losses and emotional harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to people (financial and emotional harm) and communities (widespread fraud). The article does not merely warn about potential future harm but reports ongoing, realized harm due to AI-enabled scams.

Drummond warns of rising AI-Driven scams on National Slam the Scam Day

2026-03-05
KTUL
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is being used by criminals to generate convincing fake content that leads to financial fraud and identity theft, causing direct harm to victims. The harms described include financial losses and impersonation scams, which are violations of rights and harm to communities. The AI systems' use in these scams is a direct contributing factor to the harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Digital Scams Surge Globally, Threatening Trust in the Expanding Digital Economy | Other

2026-03-05
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by criminals to generate realistic deceptive content that directly leads to financial harm and loss of trust, which are harms to individuals and communities. The AI's role is pivotal in making scams more convincing and scalable, thus directly contributing to the harm. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

With AI's Help, Fraudsters Are Targeting Smaller Banks

2026-03-06
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools are being used by fraudsters to enhance the scale, accuracy, and customization of scams targeting bank customers, particularly at smaller banks. This has led to a 1,700% spike in attacks and involves direct harm to individuals through attempted fraud and potential breaches of personal information. The AI system's use in enabling these scams constitutes direct involvement in causing harm, fitting the definition of an AI Incident due to violations of rights and harm to individuals and communities.

Michigan AG: AI scams, fraud a growing threat

2026-03-06
The Mining Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake audio and video generation) being used maliciously to commit fraud, which can cause harm to individuals (financial harm and emotional distress). However, the article does not describe a specific realized incident of harm but rather warns about the plausible and growing threat of such AI-enabled scams. Therefore, this qualifies as an AI Hazard because the development and use of AI in scams could plausibly lead to harm, even if no particular incident is detailed here.