Malicious AI Routers Enable Cryptocurrency and Credential Theft

Researchers from the University of California found that third-party AI routing services, which connect AI agents to LLM providers, can be compromised or operated maliciously. Malicious routers were observed injecting harmful code and stealing sensitive data, resulting in real cryptocurrency theft and credential exfiltration and exposing a critical supply-chain risk in AI development environments.[AI generated]
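For readers unfamiliar with how such routing services work, the sketch below illustrates in simplified Python why a third-party LLM router can see, and alter, everything an AI agent sends and receives. The endpoint URL, field names, and function are hypothetical illustrations, not code from the cited research.

```python
# Simplified, hypothetical sketch of an LLM API router: it terminates the
# agent's encrypted connection, so prompts, code context, and any embedded
# secrets pass through it in plaintext. All names below are illustrative.
import requests

UPSTREAM_API = "https://api.example-llm-provider.com/v1/chat/completions"  # assumed endpoint

def route_request(agent_request: dict, upstream_api_key: str) -> dict:
    # At this point the router can read the full request body, e.g. the
    # latest agent message, which may contain API keys or seed phrases.
    latest_message = agent_request["messages"][-1]["content"]

    # Forward the (plaintext) request to the real model provider.
    response = requests.post(
        UPSTREAM_API,
        json=agent_request,
        headers={"Authorization": f"Bearer {upstream_api_key}"},
        timeout=30,
    )
    reply = response.json()

    # A malicious operator could log latest_message or rewrite reply
    # (for example, injecting extra instructions or code) before returning
    # it to the agent, which may act on the altered output without review.
    return reply
```

Because the agent trusts whatever the router returns, the point of compromise is the routing layer itself rather than the underlying model.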

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (LLM routers) that process and route AI requests. The malicious actions of these routers, including code injection and credential theft, have directly led to harm in the form of cryptocurrency theft, which is harm to property. The researchers demonstrated actual loss of Ether, confirming realized harm. The event is not merely a potential risk but a realized incident involving AI misuse or malfunction. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Robustness & digital security, Privacy & data governance

Industries
IT infrastructure and hosting, Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Other


Articles about this incident or hazard

AI Routers Can Steal Credentials and Crypto - Research

2026-04-13
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLM routers) that process and route AI requests. The malicious actions of these routers, including code injection and credential theft, have directly led to harm in the form of cryptocurrency theft, which is harm to property. The researchers demonstrated actual loss of Ether, confirming realized harm. The event is not merely a potential risk but a realized incident involving AI misuse or malfunction. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

As AI agents scale in crypto, researchers warn of a critical security gap

2026-04-13
CoinDesk
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM routers) that process and forward user requests to AI models and have been exploited to steal credentials and drain wallets, causing realized financial harm. The harm is direct and material, involving theft of crypto assets due to malicious manipulation of AI infrastructure. This fits the definition of an AI Incident because the AI system's use and malfunction have directly led to harm to property and users' funds. The researchers' findings and examples of drained wallets confirm the harm has occurred, not just a potential risk.

Crypto Security Faces New Test As Rogue AI Agents Emerge | Bitcoinist.com

2026-04-14
Bitcoinist.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM routers) that terminate encrypted connections and handle routed data in plaintext, enabling malicious operators to steal credentials and cryptocurrency. The researchers demonstrated actual harm by draining a crypto wallet, showing direct financial loss. The misuse of AI infrastructure to steal sensitive information and funds constitutes a violation of property rights and harms individuals and communities relying on crypto security. The harm is realized, not just potential, and the AI system's role is pivotal in enabling this theft. Hence, this event meets the criteria for an AI Incident.

Malicious AI Agent Routers Could Become New Crypto Theft Plague

2026-04-13
cryptonews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI agent routers as the AI systems involved, which are part of the AI supply chain for large language models used in blockchain and DeFi applications. The malicious routers have directly caused harm by stealing ETH and exposing credentials, which is a clear financial harm to property and communities. The harm is not hypothetical but has already occurred in the wild, as confirmed by the researchers' findings. The event involves the use and misuse of AI systems leading to direct harm, fitting the definition of an AI Incident rather than a hazard or complementary information.

Will AI Steal Your Bitcoin? New Research Reveals 26 Malicious LLM Routers Linked to Crypto Theft

2026-04-13
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (LLM routers and AI agents) whose development and use have directly led to harm—specifically, theft of cryptocurrency through malicious code injection and credential exfiltration. The researchers demonstrated actual loss of Ethereum funds, confirming realized harm to property. The AI system's role is pivotal as the compromised routers enable attackers to manipulate AI agent commands and steal credentials. This fits the definition of an AI Incident because the AI system's malfunction or misuse has directly caused harm.

Dangerous AI Routers Targeting Cryptocurrency Developers: A New Security Threat - Blockonomi

2026-04-13
Blockonomi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI routing platforms that handle API traffic for large language models, which are AI systems. These platforms terminate encrypted connections, exposing sensitive data such as private keys and seed phrases. The malicious insertion of harmful instructions and credential theft has been demonstrated, including a real theft of Ether from a test wallet. The harm is direct and material, involving theft and security compromise. The AI system's malfunction or malicious use is pivotal to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

LLM Routers Are Stealing Crypto: What This Study Found

2026-04-14
Live Bitcoin News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (LLM API routers) that mediate AI coding agents' interactions with upstream models. The malicious behavior of these routers, including injecting code and exfiltrating secrets, has directly caused harm by draining an actual Ethereum wallet, which constitutes harm to property. The study documents realized harm, not just potential risk, and the AI system's malfunction or malicious use is pivotal to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

AI Router Vulnerabilities Allow Attackers to Inject Malicious Code and Steal Sensitive Data

2026-04-10
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the development and use of AI systems (AI agents relying on LLM API routers) have directly led to realized harms including malicious code injection, theft of cryptocurrency, and credential exfiltration. These harms fall under harm to property and harm to individuals or groups through data breaches. The AI system's role is pivotal as the vulnerabilities arise from the AI agent ecosystem's reliance on these routers. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is occurring and documented.

UC researchers warn third-party AI routers are stealing crypto and private keys

2026-04-13
crypto.news
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically AI routing services managing access to LLM providers. The researchers demonstrated that malicious use of these AI systems led directly to theft of cryptocurrency and cloud credentials, which is harm to property and a violation of security rights. The harm is realized, not just potential, as evidenced by the successful draining of Ether from a decoy wallet. Therefore, this event qualifies as an AI Incident because the development and use of these AI routing systems directly led to significant harm.

Researchers flag AI routers that can drain wallets

2026-04-14
coininsider.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI model routers that handle requests between agents and AI model providers. The malicious use and vulnerabilities of these routers have directly led to harm, including the theft of Ether from a wallet whose private key was controlled by the researchers, which is harm to property. The researchers' findings also highlight systemic security risks that could lead to further incidents. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly caused harm.