AI-Driven Impersonation Scams Cause Record Crypto Losses in 2025

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2025, scammers used AI tools, including deepfake voices and face-swapping, to conduct impersonation scams, leading to a 1,400% surge in cases and $14 billion in crypto losses globally. Chainalysis identified these AI-enabled tactics as a major driver of increased fraud and financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI technologies like voice cloning and deepfakes in scams that have already caused substantial financial harm to victims. The AI systems' use in generating fake identities and messages was pivotal in enabling these impersonation scams, which led to billions of dollars in losses. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm to people (financial harm) and communities (crypto user communities).[AI generated]
AI principles
Accountability
Robustness & digital security
Safety
Transparency & explainability

Industries
Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Crypto Users Hit By 1,400% Surge In Impersonation Scams, Research Shows

2026-01-14
Bitcoinist.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technologies like voice cloning and deepfakes in scams that have already caused substantial financial harm to victims. The AI systems' use in generating fake identities and messages was pivotal in enabling these impersonation scams, which led to billions of dollars in losses. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm to people (financial harm) and communities (crypto user communities).

Crypto scam trends: AI impersonation reshapes crypto fraud

2026-01-14
The Cryptonomist
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate convincing impersonations and synthetic content that facilitate large-scale crypto scams. These scams have directly caused substantial financial harm to individuals, fulfilling the criteria for an AI Incident. The AI's role is pivotal in enabling the impersonation tactics that have reshaped crypto fraud, leading to realized harm rather than just potential risk. Therefore, this is classified as an AI Incident due to the direct link between AI-enabled impersonation and significant financial harm.

AI helped drive increase in crypto scam losses to $17bn in 2025

2026-01-14
Finextra Research
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is leveraged by fraudsters to conduct more sophisticated and profitable scams, resulting in billions of dollars lost by victims. This constitutes direct harm to people through financial injury, fulfilling the criteria for an AI Incident. The involvement of AI in the scam operations is clear and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.

Crypto scammers and fraudsters stole a record $17 billion last year

2026-01-14
Sherwood News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and deepfake technology in cryptocurrency scams that have caused large-scale financial harm. The AI systems are used maliciously to impersonate legitimate organizations and increase scam effectiveness, directly resulting in harm to victims. This fits the definition of an AI Incident as the AI system's use has directly led to harm (financial theft) to people and communities. Therefore, the event is classified as an AI Incident.

Revolut to tackle impersonation scams

2026-01-13
Finextra Research
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the feature uses AI and machine learning to detect deepfake voice scams and risky transactions. The event centres on the use of AI to prevent financial scams rather than on harm occurring. Since no actual harm is reported, but the system addresses a critical vulnerability that could plausibly lead to financial harm, this qualifies as a measure addressing an AI Hazard. However, because the article does not report any realized harm or incident caused by AI misuse or malfunction, it is not an AI Incident. Its main focus is the deployment of AI-based protective technology against an evolving threat landscape, which provides context on AI risks and responses without describing a new incident or hazard. The event is therefore best classified as Complementary Information.

Revolut launches feature to prevent scams as it warns of increased use of AI by fraudsters

2026-01-13
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake voices used by fraudsters to scam users, which constitutes an AI Incident in the background (harm to users through fraud). However, the main focus of the article is on Revolut's launch of a protective AI feature to prevent such scams, representing a societal and technical response to the problem. The event does not describe a new AI Incident or AI Hazard but rather a mitigation measure and awareness of existing AI-enabled fraud. Therefore, it fits the definition of Complementary Information, providing context and updates on responses to AI harms.

Chainalysis Says AI Tools Helped Drive Crypto Scam Losses to $14 Billion in 2025 | PYMNTS.com

2026-01-13
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools enabling fraudsters to conduct more effective and profitable scams, including impersonation using deepfake voices and face-swapping. This use of AI has directly caused financial losses to victims, which is a clear harm to individuals and communities. The AI systems are central to the scam operations, making this an AI Incident as per the definitions provided. The harm is realized, not just potential, and the AI system's use is pivotal in causing the harm.

AI and Impersonation Crypto Scams Experience Record Growth in 2025

2026-01-14
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used by scammers to increase the profitability and volume of impersonation scams, which have directly led to financial harm to victims. The AI systems are part of the scam operations, making the scams more persuasive and efficient, thus directly causing harm. This fits the definition of an AI Incident because the AI system's use in the scam has directly led to harm to people (financial loss) and breaches of legal rights (fraud).

Impersonation Fraud Drives Record $17bn in Crypto Losses

2026-01-14
Infosecurity Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tooling is playing a growing role in crypto fraud, increasing the scale and effectiveness of scams, which have caused record financial losses. The harm is direct and materialized, involving impersonation fraud and social engineering enhanced by AI. The AI systems' use in enabling these scams meets the criteria for an AI Incident, as the AI's involvement has directly led to significant harm to people (financial loss) and communities (widespread fraud).

AI, Impersonations Drove Crypto Scam Losses to Record $17 Billion in 2025: Chainalysis - Decrypt

2026-01-14
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems such as deepfake technology and large language models are used to create convincing impersonations that facilitate scams causing substantial financial losses. The harms include direct financial injury to victims and broader societal harms linked to organized crime and human trafficking. The AI systems' use is central to the scale and effectiveness of these scams, fulfilling the criteria for an AI Incident due to direct harm caused by AI-enabled malicious use.

Crypto Holders Lost $17,000,000,000 to Fraudsters in 2025 As Impersonation Scams Explode: Chainalysis - The Daily Hodl

2026-01-15
The Daily Hodl
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-enabled attacks and impersonation scams have directly caused substantial financial harm to cryptocurrency holders, with concrete figures on losses and the role of AI in enabling these scams. The involvement of AI systems in the use of deepfakes and automated phishing to deceive victims and extract funds meets the definition of an AI Incident, as the AI system's use has directly led to harm to property and communities. Therefore, this event qualifies as an AI Incident.

How AI Crypto Scams Pushed On Chain Fraud Toward $17B in 2025

2026-01-15
CRYPTONEWSBYTES.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI, deepfakes, language models) in the execution of crypto scams that have directly caused large-scale financial harm to victims, fulfilling the criteria for an AI Incident. The harms include significant financial losses to individuals and communities, as well as violations of trust and potential breaches of legal protections. The article details how AI systems are central to the scale, sophistication, and profitability of these scams, indicating direct causation of harm. Although mitigation efforts are mentioned, the primary focus is on the ongoing and realized harms, not just potential or future risks or responses, thus classifying this as an AI Incident rather than a hazard or complementary information.

Revolut Introduces Call Identification Tool to Counter Rising Impersonation Scams - FinanceFeeds

2026-01-15
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (machine learning, behavioral analysis, and detection of AI-generated deepfake voices) in the development and use of a fraud prevention tool. The harms described (financial losses from impersonation scams and authorized push payment fraud) are real and ongoing, and the AI system's role is pivotal in mitigating these harms. Since the article focuses on the deployment of an AI system to prevent harm caused by AI-driven fraud, and the harm (fraud losses) is already occurring, this qualifies as an AI Incident. The event is not merely a potential risk or a general update but describes a concrete response to an existing AI-related harm.

Crypto Scams Hit Record Highs in 2025 as AI And Impersonation Fuel A New Wave of Fraud - Tekedia

2026-01-16
Tekedia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-enabled scams are significantly more profitable and have contributed to a record high in crypto fraud losses, with billions stolen and many victims affected. The AI systems are integral to the scam operations, enabling impersonation and social engineering at scale, which directly causes financial harm to people. This fits the definition of an AI Incident because the development and use of AI systems have directly led to harm to groups of people (financial injury and harm to communities). The presence of AI is clear, the harm is realized, and the connection between AI use and harm is direct and substantial.

Hey Grok, supercharge my crypto scam! | Investment Executive

2026-01-16
Investment Executive
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools are leveraged by scammers to conduct more effective and larger-scale crypto scams, resulting in billions of dollars in losses. The harms include financial injury to individuals (harm to persons/groups) and harm to communities through widespread fraud. The AI systems' use in generating deepfakes and impersonations is a direct causal factor in these harms. Hence, this is an AI Incident as the AI system's use has directly led to significant harm.

Chainalysis Report: AI Is Fueling a New Era of High-Impact Crypto Scams

2026-01-16
Cointribune
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools such as deepfake videos, face-swap software, and language models are being used by scammers to perpetrate crypto fraud at unprecedented scale and impact. The harms include financial losses totaling $17 billion, exploitation of victims, and human trafficking. These harms fall under injury to persons and harm to communities. The AI systems' use in generating convincing fake content and automating messaging is a direct factor in enabling these scams. Hence, this qualifies as an AI Incident due to realized harm caused by AI-enabled scam operations.