
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Researchers from the University of California found that third-party AI routing services, which connect AI agents to LLM providers, are vulnerable to attack. Malicious routers were observed injecting harmful code and stealing sensitive data, leading to real cryptocurrency theft and credential exfiltration and exposing a critical supply-chain risk in AI development environments.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (LLM routers) that process and route AI requests. The routers' malicious actions, including code injection and credential theft, directly caused harm to property in the form of cryptocurrency theft; the researchers demonstrated an actual loss of Ether, confirming realized harm. The event is therefore not merely a potential risk but a realized incident involving AI misuse or malfunction, and so it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]