AI-Driven Attacks Fuel Major Crypto Thefts in 2026

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2026, over $600 million was stolen in crypto hacks, with AI systems enabling attacks at scale. North Korean-linked groups used AI for social engineering, deepfakes, and automated vulnerability scanning, leading to major breaches at Kelp DAO, Drift Protocol, and Zerion. AI's role has amplified both the scale and the sophistication of these incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI being used in social engineering attacks that resulted in theft, AI-powered deepfake and voice-manipulation tools sold for bypassing security, and autonomous AI agents conducting attacks. These uses of AI have directly caused significant financial harm, fulfilling the criteria for an AI Incident: the harms are realized, not merely potential, and the development and use of the AI systems were pivotal in enabling the attacks. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Robustness & digital security, Safety

Industries
Digital security, Financial and insurance services

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Content generation, Event/anomaly detection


Articles about this incident or hazard

Phishing, Deepfakes To Fuel 2026's Biggest Crypto Hacks

2026-04-23
Cointelegraph
Crypto lost $600M to hacks in 2026, AI is making it worse

2026-04-23
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI-powered techniques to successful cyberattacks resulting in financial losses, including a $100,000 theft via AI-driven social engineering and the sale of AI-based deepfake tools for bypassing security checks. These constitute direct harms to property and financial assets caused or facilitated by AI systems, so the event meets the criteria for an AI Incident: AI's use in the attacks has directly led to significant harm. The mention of defensive AI and government responses provides complementary context but does not overshadow the report's primary incident nature.
Phishing, Deepfakes, and Supply Chain Attacks to Drive 2026's Biggest Crypto Hacks: CertiK - FinanceFeeds

2026-04-23
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and automated AI tools that enable more convincing phishing and faster, more effective attacks. These AI systems are directly contributing to significant financial losses in the crypto ecosystem, which constitutes harm to property. The involvement of AI in the development and use of these attack methods, and the resulting realized harm (multi-million dollar exploits), fits the definition of an AI Incident. The article does not merely warn of potential future harm but reports ongoing and realized attacks facilitated by AI.
CertiK warns AI misuse and infrastructure gaps to drive 2026 crypto hacks

2026-04-23
crypto.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to automate exploit development, generate deepfakes for social engineering, and conduct attacks at machine speed, which have directly resulted in large-scale thefts and security breaches in the crypto ecosystem. These are clear examples of AI systems' use leading to realized harm (financial loss and security compromise), fitting the definition of an AI Incident. The mention of defensive AI use and broader threat environment context complements the incident description but does not overshadow the primary classification as an AI Incident.
Phishing, Deepfakes to Dominate Crypto Hacks by 2026: CertiK

2026-04-23
blockchain.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled social engineering used in a hack that resulted in theft, and describes how AI tools are being used offensively to generate deepfakes and scan for vulnerabilities, leading to large-scale financial losses. These constitute direct harms to property and communities. Therefore, the event qualifies as an AI Incident because the development and use of AI systems have directly led to significant financial harm through cyberattacks. The discussion of defensive AI and regulatory responses serves as complementary information but does not negate the presence of realized harm.