AI Deepfakes Drive $4.6 Billion Surge in Crypto Scams

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A joint report by Bitget, SlowMist, and Elliptic reveals that AI-generated deepfakes and social engineering scams caused $4.6 billion in crypto losses in 2024. Scammers used synthetic videos, fake calls, and impersonations of trusted figures to deceive victims, highlighting AI's growing role in sophisticated financial fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (deepfake generation) in the execution of social engineering scams that have directly led to financial harm and security breaches for individuals. The AI's role is pivotal in increasing the success rate of these scams by creating convincing fake videos and communications. Therefore, this constitutes an AI Incident due to realized harm caused by AI-enabled fraudulent activities.[AI generated]
AI principles
Accountability, Safety, Privacy & data governance, Transparency & explainability, Democracy & human autonomy

Industries
Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

AI deepfakes are crypto's biggest threat: Bitget, SlowMist, Elliptic warn

2025-06-10
crypto.news
87 deepfake scam rings taken down across Asia in Q1 2025: Bitget Report

2025-06-10
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI-generated deepfakes to conduct scams, which have directly caused financial harm and deception. The dismantling of scam rings indicates that these harms have materialized. The harms include fraud losses and violations of trust, which fall under harm to communities and individuals. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm.
AI Deepfakes Fuel $4.6B Crypto Scams Surge: 2025 Report

2025-06-10
cryptonews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technology, AI-generated synthetic videos, AI-driven scam rings) being used to perpetrate crypto scams that have directly caused billions of dollars in losses to victims. This constitutes direct harm to individuals (financial harm) and communities (crypto user communities). The AI systems' use in deception and fraud fits the definition of an AI Incident, as the AI system's use has directly led to significant harm. Although the article also discusses responses and future risks, the primary focus is on the realized harm from AI-enabled scams, making this an AI Incident rather than a hazard or complementary information.
Crypto Scams With AI Deepfakes Cost Victims $4.6B in 2024, Marking a 24% Increase

2025-06-10
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology in crypto scams that have caused $4.6 billion in losses in 2024, demonstrating direct harm to victims' property. The AI system's role is pivotal as it enables highly convincing impersonations and fraudulent promotions that facilitate these scams. This meets the definition of an AI Incident because the development and use of AI systems have directly led to significant harm. The article does not merely warn of potential harm but documents ongoing and realized harm, excluding classification as an AI Hazard or Complementary Information.
New Bitget Report Shows Harrowing Details of DeepFake and Zoom Crypto Scams

2025-06-10
BeInCrypto
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of AI deepfake synthesis tools to create realistic fake videos and audio that have been used to deceive victims in crypto scams, resulting in actual financial losses and identity hijacking. The harms are direct and materialized, including theft of funds and compromise of user accounts. The AI systems' role is pivotal in fabricating convincing fake identities and meetings, which are central to the scams' success. This meets the definition of an AI Incident, as the AI system's use has directly led to harm to persons (financial injury) and harm to communities (loss of trust in the crypto ecosystem).
Research From Bitget Anti-Scam Month (2025) Looks At The Current Challenges And Ways To Prevent Crypto-related Scams

2025-06-10
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology, AI-based social engineering, AI tools for identity falsification) in the commission of crypto-related scams that have directly led to financial harm to individuals and communities. The harms described include deception causing loss of funds, which fits the definition of an AI Incident under harm to communities and property. The article also discusses responses to these harms but the primary focus is on the realized harms caused by AI-enabled scams, not just potential or future risks. Therefore, this qualifies as an AI Incident.
Research From Bitget Anti-Scam Month (2025) Looks At The Current Challenges And Ways To Prevent Crypto-related Scams

2025-06-10
The Industry Spread
Why's our monitor labelling this an incident or hazard?
The article clearly describes AI systems being used in the development and execution of scams that have caused direct financial harm to people, such as deepfake videos promoting fraudulent crypto platforms and AI-assisted identity fraud. These are concrete examples of AI systems being misused to cause harm, fulfilling the criteria for an AI Incident. Although the article also discusses prevention and detection efforts, the main narrative centers on the actual harms caused by AI-enabled scams, not just potential risks or responses. Therefore, the event qualifies as an AI Incident due to the direct link between AI misuse and realized harm to individuals and communities in the crypto ecosystem.
AI Deepfakes Plague Crypto, Fueling $4.6B Scam Surge: Report

2025-06-10
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and AI-driven scam rings that have caused realized financial harm to victims in the crypto industry. The harms include deception, theft, and fraud resulting in billions of dollars lost. The AI systems' use in creating convincing synthetic content and social engineering tactics directly led to these harms. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant harm to people and communities.
Nearly 40% of All Crypto Scams Involve Deepfake Technology: Bitget Report

2025-06-11
Coingape
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI-generated deepfakes and AI arbitrage bots are actively used in crypto scams causing substantial financial losses. The harms are realized and directly linked to the AI systems' use in social engineering and fraud schemes. This fits the definition of an AI Incident, as the AI system's use has directly led to significant harm to communities (financial losses) and individuals. The event is not merely a potential risk or a general update but documents ongoing harm caused by AI-enabled scams.
The Bitget Anti-Scam Report shows that AI-related scams caused $4.6 billion in crypto losses in 2024

2025-06-11
The Cryptonomist
Why's our monitor labelling this an incident or hazard?
The report explicitly links AI technologies such as deepfake and AI-driven social engineering to large-scale financial scams resulting in billions of dollars lost by victims. The AI systems' use in generating fake identities and communications directly contributes to the harm experienced by users, fulfilling the criteria for an AI Incident. The harm is materialized and significant, involving violations of property rights and harm to communities through financial fraud. The presence and misuse of AI systems are clear and central to the incident described.
Bitget Reports $4.6 Billion Lost to AI-Linked Crypto Scams in 2024

2025-06-11
Cointribune
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-powered frauds, such as deepfake impersonations and AI-enhanced social engineering, have directly led to massive financial losses ($4.6 billion) in the crypto space. This constitutes injury to property and harm to communities. The involvement of AI systems in the development and use of these scams is clear and central to the harm described. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Bitget Anti-Scam Report Shows AI-Related Scams Drive $4.6B in Crypto Losses in 2024

2025-06-11
CryptoTicker
Why's our monitor labelling this an incident or hazard?
The report explicitly links the use of AI technologies such as deepfakes and synthetic videos to actual scams that have caused substantial financial harm to individuals and groups. This constitutes direct harm caused by the use of AI systems in malicious activities, fulfilling the criteria for an AI Incident. The event details realized harm rather than potential harm, and the AI system's role in enabling these scams is pivotal.
AI-Powered Scams Surge: How To Protect Yourself

2025-06-11
Crypto Weekly
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered scams using deepfakes and AI-generated personas to impersonate individuals and conduct fraudulent activities, which have caused actual financial harm to investors. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial loss) to people. The article also discusses the difficulty of recovering stolen funds due to blockchain obfuscation tools, reinforcing the harm caused. The focus is on realized harm rather than potential harm or general information, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI systems are central to the scam operations described.
Crypto Scams Cost Investors $4.6 Billion in 2024: Bitget

2025-06-12
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-driven social engineering tactics and deepfake technologies are being used by scammers to deceive crypto investors, causing direct financial harm amounting to billions of dollars. This constitutes injury or harm to groups of people (financial harm), fulfilling the criteria for an AI Incident. The AI systems are used maliciously to generate fake content and phishing bots, directly leading to harm. Therefore, this event is classified as an AI Incident.
Crypto Scam Losses Exceeded $4 Billion in 2024, Driven by Deepfake and AI Tech, Says Bitget

2025-06-12
The Fintech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered scams causing over $4 billion in losses, involving deepfake videos and synthetic calls, which are AI systems generating deceptive content. This misuse of AI has directly led to significant financial harm to users, fitting the definition of an AI Incident due to realized harm (financial loss) caused by AI system use (scam generation). The involvement of AI in the scam tactics and the resulting harm to communities and individuals justifies classification as an AI Incident rather than a hazard or complementary information.
HUF 1,615 billion: how much money crypto scammers using AI made disappear last year

2025-06-10
Portfolio.hu
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by criminals to conduct and conceal cryptocurrency scams, which have directly caused significant financial harm to victims. The AI systems are used in the development and execution of fraudulent schemes, including phishing and money laundering, which fits the definition of an AI Incident as the AI system's use has directly led to harm (financial loss).
Cryptocurrency scams caused billions of dollars in losses in 2024

2025-06-10
Privátbankár.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-based fraud methods such as deepfake videos and AI-enhanced phishing tactics that have caused substantial financial losses. These harms fall under significant harm to property and communities. The AI systems are actively used in the commission of these crimes, making the AI involvement direct and causal to the harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
AI-driven cryptocurrency scams caused nearly $5 billion in losses last year

2025-06-11
hirado.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI involvement in cryptocurrency scams, including AI-generated deepfake videos and other AI-enabled fraudulent methods. The harm is direct financial loss to victims, which qualifies as harm to property and communities. Therefore, this event meets the criteria for an AI Incident because the development and use of AI systems have directly led to significant harm.
AI-driven cryptocurrency scams: nearly $5 billion in losses caused in 2024

2025-06-10
adozona.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI (referred to as 'MI' in Hungarian) being used to support cryptocurrency scams that have resulted in substantial financial losses worldwide. The harms described include deception through AI-generated deepfakes, phishing, and Ponzi schemes, all of which have directly caused monetary harm to victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial loss) to people and communities. Therefore, the event qualifies as an AI Incident.
Artificial intelligence trickery caused a trillion forints in losses

2025-06-11
Economx.hu
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in the execution of sophisticated fraud schemes that have directly led to substantial financial losses (harm to property). The AI's role in enabling these scams to be more effective and harder to detect establishes a direct link to harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.