Malicious LiteLLM PyPI Package Compromises AI Developer Systems


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

LiteLLM, a popular Python package used as AI middleware, was compromised on PyPI: versions 1.82.7 and 1.82.8 contained malicious code that stole credentials and enabled backdoor access. The attack, attributed to TeamPCP, put developer and cloud environments at significant risk, affecting systems that rely on AI agent stacks globally.[AI generated]
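A quick way to check whether an environment is exposed is to compare the installed litellm version against the two releases named above. A minimal sketch using only the standard library, assuming the version set reported here is complete:

```python
from importlib.metadata import PackageNotFoundError, version

# Release versions reported as compromised in this incident.
COMPROMISED = {"1.82.7", "1.82.8"}

try:
    installed = version("litellm")
except PackageNotFoundError:
    installed = None

if installed in COMPROMISED:
    print(f"WARNING: litellm {installed} is a known-compromised release.")
    print("Remove it and rotate any credentials the environment exposed.")
elif installed:
    print(f"litellm {installed} is not on the compromised list.")
else:
    print("litellm is not installed in this environment.")
```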

Why's our monitor labelling this an incident or hazard?

The incident involves the malicious compromise of an AI-related software package (litellm) that is part of the AI ecosystem. The compromise caused direct harm by enabling credential theft and unauthorized access to cloud and developer environments, harming property and potentially the communities that rely on these systems. Because the AI system's development and use (the package is an AI abstraction layer) were exploited maliciously and the harm was realized, this qualifies as an AI Incident.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Workers; Business

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function
Research and development


Articles about this incident or hazard


Litellm Compromise Reveals Backdoor Risk: Multi-Stage Malware Steals Cloud, Crypto and Chat Keys

2026-03-25
El-Balad.com

Compromised LiteLLM Package With 95M Downloads Tied to TeamPCP, After Trivy & KICS Hacks

2026-03-25
IT Security News
Why's our monitor labelling this an incident or hazard?
LiteLLM is an AI system used to route requests to LLM providers, so its compromise involves AI system misuse. The injection of malicious code into the library's versions is a direct misuse of the AI system's development and distribution, constituting an AI Incident because it has directly led to a security breach affecting users and systems. The high download volume indicates widespread impact potential. The event describes realized harm through the supply chain attack, not just a potential risk, so it qualifies as an AI Incident rather than a hazard or complementary information.
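Supply-chain injections of this kind land at install time, so one common mitigation is to install only artifacts whose digests match a pinned lockfile; pip supports this natively via `pip install --require-hashes -r requirements.txt`. The sketch below shows the underlying comparison, with a placeholder digest standing in for a real, trusted litellm hash:

```python
import hashlib
import sys

# Placeholder digest; a real value would come from a trusted lockfile,
# not from PyPI at install time.
EXPECTED_SHA256 = "0" * 64

def sha256_of(path: str) -> str:
    """Stream the file so large wheels need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    artifact = sys.argv[1]  # e.g. a downloaded litellm wheel
    actual = sha256_of(artifact)
    if actual != EXPECTED_SHA256:
        sys.exit(f"hash mismatch for {artifact}: {actual}")
    print(f"{artifact} matches the pinned digest")
```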

AI Agents 021 -- LiteLLM Got Owned: What the PyPI Supply Chain Attack Means for Your AI Agent Stack

2026-03-25
Medium
Why's our monitor labelling this an incident or hazard?
The event involves an AI system component (LiteLLM) used in AI agent stacks, which is an AI middleware interfacing with large language model providers. The malicious code embedded in the package directly caused harm by stealing credentials, leading to full compromise of affected systems. This constitutes an AI Incident because the development and use of the AI system (LiteLLM) directly led to realized harm (credential theft and system compromise). The article is not merely about potential risks or responses but reports on an actual attack that occurred and caused harm.
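For readers unfamiliar with the library's role, a minimal sketch of typical LiteLLM usage in an agent stack follows (the model name and API key are placeholders). Because the package sits between application code and provider APIs and reads credentials from the environment, a backdoored release sees everything needed for the credential theft described here:

```python
import os
from litellm import completion

# Placeholder key: LiteLLM reads provider credentials such as OPENAI_API_KEY
# from the environment, which is exactly what a backdoored release could
# read and exfiltrate alongside the requests it proxies.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

response = completion(
    model="gpt-4o",  # LiteLLM maps the model name to the matching provider
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```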