Authon Blog
debugging · 7 min read

How to Detect and Recover From a Compromised PyPI Package

How to detect, respond to, and prevent PyPI supply chain attacks like the compromised LiteLLM package versions that exfiltrated environment variables.

Alan West
Authon Team

So you wake up, check your Slack, and someone's posted a link to a GitHub issue claiming that a package you depend on — one sitting in your requirements.txt right now — has been compromised on PyPI. Your stomach drops.

That's exactly what happened to developers using LiteLLM recently when versions 1.82.7 and 1.82.8 on PyPI were found to contain malicious code. The compromised versions included a payload designed to exfiltrate environment variables — API keys, database credentials, secrets — to an attacker-controlled server. If you had those versions installed and running, your secrets may already be in someone else's hands.

This isn't hypothetical. This is a real supply chain attack, and it's a pattern we're seeing more and more. Let me walk you through how to check if you're affected, how to respond, and how to protect yourself going forward.

Understanding the Attack Vector

Supply chain attacks against PyPI packages typically work in one of a few ways:

  • Account compromise — An attacker gains access to a maintainer's PyPI credentials and pushes a malicious release
  • Typosquatting — A package with a similar name is published (not the case here)
  • Build pipeline injection — The CI/CD system that publishes the package is compromised

In LiteLLM's case, the malicious versions were published to the real package. This is the scariest kind — you can't catch it by double-checking the package name. The code injection targeted environment variables, which makes sense from an attacker's perspective. LLM proxy libraries are goldmines because developers routinely store OpenAI, Anthropic, and other provider API keys in their environment.

The malicious payload was embedded in the package code and executed during normal import or runtime. It would quietly read os.environ, serialize the contents, and POST them to an external endpoint. No errors, no warnings, nothing in your logs unless you were watching outbound network traffic very carefully.
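If you want a quick first pass at whether an installed copy contains anything like this, you can grep the installed source for the telltale combination of environment access and outbound HTTP calls. A rough sketch — the patterns here are illustrative, not the actual indicators of compromise from this incident:

```shell
# Locate the installed package source and flag files that both read the
# environment and make outbound HTTP calls. These patterns are illustrative;
# plenty of legitimate code matches them, so treat hits as leads, not proof.
SITE=$(python -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")
grep -rl "os.environ" "$SITE/litellm" 2>/dev/null \
  | xargs -r grep -lE 'requests\.post|urlopen|http\.client' 2>/dev/null
```

Anything this prints deserves a manual read, paying particular attention to code that runs at import time.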

Step 1: Check If You're Running a Compromised Version

First, figure out what you actually have installed:

bash
# Check your installed version
pip show litellm | grep Version

# If you're using a lockfile, check there too
grep litellm requirements.txt
grep litellm poetry.lock
grep litellm Pipfile.lock

If you see version 1.82.7 or 1.82.8, you are affected. But also check your Docker images, CI environments, and any deployment artifacts — anywhere packages get installed.

bash
# Check across multiple virtual environments
find / -name "litellm" -path "*/site-packages/*" 2>/dev/null

# Inspect the installed package metadata for the exact version
pip show litellm --verbose
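To sweep every environment consistently, you can wrap the check in a small script. The affected version list is taken from this incident; update it as advisories evolve, and the Docker image name below is a placeholder for your own:

```shell
# Flag known-compromised litellm versions; exits non-zero if affected.
# Run the same script inside Docker images and CI runners too, e.g.:
#   docker run --rm your-image:tag sh -c '...'   (image name is a placeholder)
v=$(pip show litellm 2>/dev/null | awk '/^Version:/ {print $2}')
case "$v" in
  1.82.7|1.82.8) echo "AFFECTED: litellm $v"; exit 1 ;;
  "")            echo "litellm is not installed" ;;
  *)             echo "OK: litellm $v" ;;
esac
```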

Step 2: Contain the Damage

If you were running a compromised version, treat your environment variables as leaked. All of them. Not just the ones you think are important.

bash
# Immediately pin to a known safe version
pip install litellm==1.82.6  # last known good version before compromise

# Or pin to whatever safe version the maintainers have published post-incident
pip install litellm --upgrade  # only after confirming the latest is clean

Then start rotating secrets. I know this is painful, but it's non-negotiable:

  • LLM API keys — Rotate every OpenAI, Anthropic, Cohere, or other provider key that was in the environment
  • Database credentials — If your DB connection strings were in env vars, rotate them
  • Cloud provider credentials — AWS keys, GCP service accounts, Azure tokens
  • Internal service tokens — Anything else sitting in the environment

bash
# Example: regenerate an OpenAI API key via their dashboard, then update
export OPENAI_API_KEY="sk-new-rotated-key-here"

# If you use AWS, rotate access keys
aws iam create-access-key --user-name your-service-user
aws iam delete-access-key --user-name your-service-user --access-key-id OLD_KEY_ID

Check your cloud provider billing dashboards too. Stolen LLM API keys often get used immediately for inference abuse, and you might see a spike in usage.

Step 3: Audit Your Network Logs

If you have outbound traffic logs (and you should), look for unusual POST requests originating from your application during the window when the compromised package was active.

bash
# If you have access to firewall or proxy logs, search the timeframe
# when the compromised version was installed, and look for outbound
# HTTP requests to unfamiliar domains

# If you use tcpdump or similar (in a safe, isolated environment):
# HTTPS payloads are encrypted, so grepping captures for "POST" only
# works on plaintext port 80 traffic
tcpdump -i any -A 'tcp port 80' | grep -i "POST"

# For HTTPS, review destination IPs and TLS connections instead
tcpdump -i any -nn 'tcp port 443'

This helps you understand the blast radius. If you can confirm the exfiltration endpoint was contacted, you know for certain your secrets were sent.
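If your forward proxy or load balancer writes combined-format access logs, a frequency count of POST destinations can surface the odd one out. The log path and format below are assumptions about your setup; adjust both to match:

```shell
# Count outbound POST destinations in a combined-format access log.
# An exfiltration endpoint typically shows up as an unfamiliar domain.
# The path /var/log/nginx/access.log is an assumption; use your own.
grep '"POST ' /var/log/nginx/access.log \
  | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
```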

Prevention: How to Not Get Burned Next Time

Here's where it gets real. You can't prevent a maintainer's account from being compromised, but you can limit the damage.

Pin Your Dependencies

Stop using loose version specifiers in production. Just stop.

txt
# Bad — you'll auto-install whatever is newest, including compromised versions
litellm>=1.80

# Good — explicit version pinning
litellm==1.82.6

# Better — use a hash-pinned lockfile
# pip-compile with --generate-hashes
litellm==1.82.6 \
    --hash=sha256:abc123...

Hash pinning is the gold standard here. If the artifact pip downloads ever differs from the one you originally reviewed and pinned — even if it carries the same version number — hash verification catches the mismatch and aborts the install.
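A minimal workflow for getting there, assuming you have pip-tools installed (pip-compile is part of that package) and network access to PyPI:

```shell
# Generate a hash-pinned lockfile from a loose requirements file,
# then make pip refuse any artifact that doesn't match a pinned hash.
printf 'litellm==1.82.6\n' > requirements.in
pip-compile --generate-hashes requirements.in -o requirements.txt
pip install --require-hashes -r requirements.txt
```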

Use pip-audit or Safety

Integrate vulnerability scanning into your CI pipeline:

bash
# pip-audit checks against the Python Packaging Advisory Database
pip install pip-audit
pip-audit

# In CI, a non-zero exit code fails the build when known vulnerabilities
# are found; --strict additionally fails if any dependency can't be audited
pip-audit --strict

This won't catch zero-day compromises (nothing will in the first few hours), but it'll flag known-bad versions once advisories are published.

Isolate Your Secrets

The payload worked because secrets were in environment variables that any code in the process could read. Consider:

  • Secret managers — Use AWS Secrets Manager, HashiCorp Vault, or GCP Secret Manager instead of raw env vars. The secret is fetched at runtime and doesn't linger in os.environ.
  • Least-privilege environments — Don't give your LLM proxy container access to your database credentials. Segment your secrets by service.
  • Egress monitoring — Set up network policies that restrict outbound connections from your application containers. A legitimate LLM proxy only needs to talk to a known set of API endpoints.

Watch for PyPI Advisories

Subscribe to security advisories for your critical dependencies:

  • PyPI advisory database
  • GitHub's Dependabot alerts (free for public and private repos)
  • The gh api /repos/OWNER/REPO/dependabot/alerts endpoint, for pulling alerts from the command line

The Bigger Picture

Supply chain attacks on package registries aren't new, but they're accelerating. The npm ecosystem has been dealing with this for years, and Python is catching up — in the worst way. The LiteLLM incident is a reminder that pip install is an act of trust.

Every package you install gets to run arbitrary code in your environment. Every dependency of that package does too. And most of us have no idea what's actually in our dependency tree.

I've started doing something I used to think was overkill: I periodically review the diff between pinned versions before upgrading. For critical infrastructure packages, it's worth the twenty minutes. Run pip download litellm==1.82.9 --no-deps, unpack it, and actually look at what changed. You won't catch everything, but you might catch the obvious stuff.
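The mechanical part of that review can be scripted. This is a sketch, assuming both versions are distributed as wheels (which are just zip archives) and that you have network access to PyPI; the version numbers follow the article's example:

```shell
# Download two versions without dependencies and diff their source trees.
# Wheels are zip archives, so unzip works directly on them.
pip download litellm==1.82.6 --no-deps -d /tmp/old
pip download litellm==1.82.9 --no-deps -d /tmp/new
unzip -q /tmp/old/*.whl -d /tmp/old_src
unzip -q /tmp/new/*.whl -d /tmp/new_src
diff -r /tmp/old_src /tmp/new_src | less
```

Skim the diff for anything touching os.environ, networking, or import-time side effects that the changelog doesn't explain.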

The uncomfortable truth is that there's no silver bullet here. Pin your deps, hash your lockfiles, scan in CI, monitor your network egress, and segment your secrets. None of these are perfect alone, but layered together they turn a catastrophic breach into a contained incident.

Stay safe out there. And go rotate those keys.
