Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised

873 points by dot_treo a day ago on hackernews | 461 comments

bfeynman | a day ago

pretty horrifying. I only use it as a lightweight wrapper and will most likely move away from it entirely. Not worth the risk.

[OP] dot_treo | a day ago

Even just having an import statement for it is enough to trigger the malware in 1.82.8.

iwhalen | a day ago

What is happening in this issue thread? Why are there 100+ satisfied slop comments?

kevml | a day ago

Potentially compromised?

cirego | a day ago

First thing I noticed too.

nubg | a day ago

Are they trying to slide stuff down? but it just bumps stuff up?

bakugo | a day ago

Attackers trying to stifle discussion, they did the same for trivy: https://github.com/aquasecurity/trivy/discussions/10420

Imustaskforhelp | a day ago

I have created a comment to hopefully steer the discussion towards Hacker News, in case the threat actor is stifling genuine comments on GitHub by spamming that thread with hundreds of accounts:

https://github.com/BerriAI/litellm/issues/24512#issuecomment...

kevml | a day ago

ddp26 | 22 hours ago

Yeah, this was my team at FutureSearch that had the lucky experience of being first to hit this, before the malware was disclosed.

One thing not in that writeup is that very little action was needed for my engineer to get pwnd. uvx automatically pulled latest litellm (version unpinned) and built the environment. Then Cursor started up the local MCP server automatically on load.

cpburns2009 | a day ago

jbkkd | a day ago

tinix | 9 hours ago

These links trigger prefetch in Chrome (it doesn't respect the nofollow rel).

I got popped by our security team, they were convinced I had this malware because my machine attempted to connect to the checkmarx domain.

clearly a false positive but I still had to roll credentials and wipe my machine.

bratao | a day ago

Looks like the founder and CTO's account has been compromised. https://github.com/krrishdholakia

jadamson | a day ago

Most of his recent commits are small edits claiming responsibility on behalf of "teampcp", which was the group behind the recent Trivy compromise:

https://news.ycombinator.com/item?id=47475888

soco | a day ago

I was just wondering why the Trivy compromise hit only npm packages, thinking that bigger stuff should appear sooner or later. Here we go...

deep_noz | a day ago

good i was too lazy to bump versions

jadamson | a day ago

In case you missed it, according to the OP, the previous point release (1.82.7) is also compromised.

[OP] dot_treo | a day ago

Yeah, that release has the base64 blob, but it didn't contain the .pth file that auto-triggers the malware on import.

jadamson | a day ago

The latest version with the .pth file doesn't require an import to trigger the exploit (just having the package installed is enough thanks to [1]).

The previous version triggers on `import litellm.proxy`

Again, all according to the issue OP.

[1] https://docs.python.org/3/library/site.html

hiciu | a day ago

Besides the main issue here, and the owner's account possibly being compromised as well, there are 170+ low quality spam comments in there.

I would expect better spam detection system from GitHub. This is hardly acceptable.

orf | a day ago

I'm guessing it's accounts they have compromised with the stealer.

ebonnafoux | a day ago

They repeat only six sentences across 100+ comments:

Worked like a charm, much appreciated.

This was the answer I was looking for.

Thanks, that helped!

Thanks for the tip!

Great explanation, thanks for sharing.

This was the answer I was looking for.

dec0dedab0de | a day ago

Over the last ~15 years I have been shocked by the amount of spam on social networks that could have been caught with a Bayesian filter. Or in this case, a fairly simple regex.
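As a sketch of how simple that filter could be (the phrase list is taken verbatim from the parent comment; the normalization and matching rules are my own assumptions, and real spam detection would need fuzzier matching):

```python
import re

# Phrases observed verbatim in the spam flood (from the parent comment).
SPAM_PHRASES = [
    "Worked like a charm, much appreciated.",
    "This was the answer I was looking for.",
    "Thanks, that helped!",
    "Thanks for the tip!",
    "Great explanation, thanks for sharing.",
]

def _normalize(text: str) -> str:
    """Collapse whitespace and lowercase so trivial variations still match."""
    return re.sub(r"\s+", " ", text).strip().lower()

_SPAM_SET = {_normalize(p) for p in SPAM_PHRASES}

def looks_like_spam(comment: str) -> bool:
    """Flag a comment that is nothing but a known stock phrase."""
    return _normalize(comment) in _SPAM_SET

print(looks_like_spam("thanks,   that helped!"))                   # True
print(looks_like_spam("The .pth file is the real problem here."))  # False
```

Exact-match filtering obviously misses paraphrased spam, but for a flood that repeats the same handful of sentences verbatim, even this would catch most of it.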

Imustaskforhelp | a day ago

Well, large companies/corporations don't care about spam because they actually benefit from it in a way, as it boosts their engagement ratio.

It just can't get so bad that advertisers leave the platform, and I think they sort of succeed at keeping it below that line.

Think about it: if Facebook shows you AI slop ragebait, or any rage-inducing comment from multiple bots designed to farm attention or for malicious purposes in general, and you fall for it and show engagement that it can show you ads against, do you think it has any incentive to take a stance against that form of spam?

dec0dedab0de | a day ago

Yeah, I almost included that part in my comment, but it still sucks.

dewey | 20 hours ago

> Well, large companies/corporations don't care about Spam because they actually benefit from spam in a way as it boosts their engagement ratio

I'm not sure that's actually true. It's just that at scale this is still a hard problem that you don't "just" fix by running a simple filter, as there will be real people / paying customers getting caught up in the filter and then complaining.

Having "high engagement" doesn't really help you if you are optimizing for advertising revenue, bots don't buy things so if your system is clogged up by fake traffic and engagement and ads don't reach the right target group that's just a waste.

PunchyHamster | 10 hours ago

It's the bear trash lock problem all over again.

It could be solved by a filter, but the filter would also have a bunch of false positives.

It seems like if the content is this hollow and useless, it shouldn't matter if it was a human or spambot posting it.

ratdoctor | a day ago

Or they're just bots. This repository has 40k+ stars somehow.

snailmailman | a day ago

The same thing occurred on the trivy repo a few days ago. A GitHub discussion about the hack was closed and 700+ spam comments were posted.

I scrolled through and clicked a few profiles. While many might be spam accounts or low-activity accounts, some appeared to be actual GitHub users with a history of contributions.

I’m curious how so many accounts got compromised. Are those past hacks, or is this credential-stealing hack very widespread?

Are the trivy and litellm hacks just 2 high profile repos out of a much more widespread “infect as many devs as possible, someone might control a valuable GitHub repository” hack? I’m concerned that this is only the start of many supply chain issues.

Edit: Looking through, several of the accounts have a recent commit "Update workflow configuration" where they are placing a credential stealer into a CI workflow. The commits are all back in February.

consp | 19 hours ago

Once is happenstance. Twice is coincidence. Three times is enemy action.

snailmailman | 2 hours ago

Update: It looks like the accounts have all been deleted by GitHub. Their repos and recent malicious commits are all just 404 pages now.

I'm curious what the policy is there if the accounts were compromised. Can the original users "restore" their accounts somehow? For now it appears the accounts are gone. Maybe they were entirely bot accounts but a few looked like compromised "real" accounts to me.

fdsjgfklsfd | 22 hours ago

Reporting spam on GitHub requires you to click a link, specify the type of ticket, write a description of the problem, solve multiple CAPTCHAs of spinning animals, and press Submit. It's absurd.

nickspacek | a day ago

teampcp taking credit?

https://github.com/krrishdholakia/blockchain/commit/556f2db3...

  - # blockchain
  - Implements a skeleton framework of how to mine using blockchain, including the consensus algorithms.
  + teampcp owns BerriAI

rgambee | a day ago

Seems that the GitHub account of one of the maintainers has been fully compromised. They closed the GitHub issue for this problem. And all their personal repos have been edited to say "teampcp owns BerriAI". Here's one example: https://github.com/krrishdholakia/blackjack_python/commit/8f...

rgambee | a day ago

Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.

stavros | a day ago

That was massively more interesting, this is just a straight-up hack.

johanyc | 19 hours ago

I don't expect one. This kind of attack is pretty common nowadays. The xz attack was special for how long the guy worked on it and how severe it could have been.

TZubiri | a day ago

Thank you for posting this, interesting.

I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.

To reduce supply chain risk, you need to evaluate not only the vendor (even if gratis and open source) but also the advantage it provides.

Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.

Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.

And even if you weren't using this specific dependency, check your deps, you might have shit like this in your requirements.txt and was merely saved by chance.

An additional note is that the dev will probably post a post-mortem, what was learned, how it was fixed, maybe downplay the thing. Ignore that, the only reasonable step after this is closing a repo, but there's no incentive to do that.

xinayder | a day ago

> Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.

Programming against different LLM APIs is a hassle; this library made it easy by giving you one single API to call, handling all the different provider-specific API calls behind the scenes.

otabdeveloper4 | a day ago

There's only two different LLM APIs in practice (Anthropic and everyone else), and the differences are cosmetic.

This is like a couple hours of work even without vibe coding tools.
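For what it's worth, a minimal sketch of what such a wrapper boils down to for OpenAI-compatible endpoints (the provider table, base URLs, model name, and key below are made-up assumptions for illustration; real providers differ in auth schemes, streaming, and error handling):

```python
import json
import urllib.request

# Hypothetical provider table; real base URLs and auth schemes vary, and
# Anthropic's native API differs beyond this simple mapping.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "local": "http://localhost:8000/v1",
}

def build_chat_request(provider: str, model: str, messages: list[dict],
                       api_key: str) -> urllib.request.Request:
    """Build a POST to an OpenAI-compatible /chat/completions endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{PROVIDERS[provider]}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def chat(provider: str, model: str, messages: list[dict], api_key: str) -> str:
    """Send the request and pull the assistant text out of the response."""
    req = build_chat_request(provider, model, messages, api_key)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

req = build_chat_request("local", "some-model",
                         [{"role": "user", "content": "hi"}], "sk-test")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

This is the "couple hours of work" version; the long tail of provider quirks, retries, and feature gaps is where the real effort (and the appeal of a shared library) comes from.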

dragonwriter | 18 hours ago

> There's only two different LLM APIs in practice (Anthropic and everyone else), and the differences are cosmetic.

There's more than that (even if most other systems also provide an OpenAI-compatible API, which may or may not expose all features of the platform or all features of the OpenAI API), and the differences are not cosmetic. But since LiteLLM itself just presents an OpenAI-compatible API, it can't be providing access to other vendor features that don't map cleanly to that API, and I don't think it's likely to be using the native API for each vendor and being more complete in its OpenAI-compatible implementation of even the features that map naturally than the first-party OpenAI-compatibility APIs.

TZubiri | a day ago

>Programming for different LLM APIs is a hassle

That's what they pay us for

I'd get it if it were a hassle that could be avoided, but it feels like you are trying to avoid the very work you are being paid for, like if a MCD employee tried to pay a kid with Happy Meal toys to work the burger stand.

Another red flag, although a bit more arguable, is that by 'abstracting' the API into a more generic one, you achieve vendor neutrality, yes, but you also integrate much more loosely with your vendors, possibly lose unique features (or can only access them with even more 'hassle' via custom options), and strategically, your end product will veer into commodity territory, which is not a place you usually want to be.

hrmtst93837 | a day ago

One wrapper cuts API churn, but it also widens the supply-chain blast radius you own.

rcleveng | 19 hours ago

I think almost everyone supports the openai api anyway (even Gemini). Not entirely sure why there needs to be a wrapper.

dragonwriter | 18 hours ago

Most do, but Anthropic indicates that theirs "is not considered a long-term or production-ready solution for most use cases" [0]. In any case, where the OpenAI-compatible API isn't the native API, both for cloud vendors other than OpenAI and for self-hosting software, the OpenAI-compatible API is often limited: the native API offers features that don't map to the OpenAI API (which a wrapper that presents an OpenAI-compatible API is not going to solve), and the vendor often lags in implementing support for features in the OpenAI-compatible API, including things like new OpenAI endpoints that may support features the native API already supports (e.g., adding support for chat completions when completions were the norm, or responses when chat completions were). A wrapper that used the native API and did its own mapping to OpenAI could, in principle, address that.

[0] https://platform.claude.com/docs/en/api/openai-sdk

circularfoyers | a day ago

Comparing this project to is-odd seems very disingenuous to me. My understanding is this was the only way you could use llama.cpp with Claude Code for example, since llama.cpp doesn't support the Anthropic compatible endpoint and doing so yourself isn't anywhere near as trivial as your comparison. Happy to be corrected if I'm wrong.

jerieljan | a day ago

That's a correct example, and I agree, it is disingenuous to just trivially call this an `is-odd` project.

Back in the days of GPT-3.5, LiteLLM was one of the projects that helped provide a reliable adapter for projects to communicate across AI labs' APIs and when things drifted ever so slightly despite being an "OpenAI-compatible API", LiteLLM made it much easier for developers to use it rather than reinventing and debugging such nuances.

Nowadays, that gateway of theirs isn't just a funnel for centralizing API calls; it also serves other purposes, like putting guardrails consistently across all connections, tracking key spend on tokens, dispensing keys without having to do so on the main platforms, etc.

There's also more to just LiteLLM being an inference gateway too, it's also a package used by other projects. If you had a project that needed to support multiple endpoints as fallback, there's a chance LiteLLM's empowering that.

Hence, supply chain attack. The GitHub issue is literally full of mentions from other projects, because they're urged to pin to safe versions since they rely on it.

sschueller | a day ago

Does anyone know a good alternative project that works similarly (sharing multiple LLMs across a set of users)? LiteLLM has been getting worse and keeps trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.

tacoooooooo | a day ago

pydantic-ai

river_otter | a day ago

github.com/mozilla-ai/any-llm :)

sschueller | a day ago

I just found https://github.com/jasmedia/InferXgate which looks interesting although quite new and not supporting so many providers.

redrove | a day ago

Bifrost is the only real alternative I'm aware of https://github.com/maximhq/bifrost

sschueller | a day ago

Virtual Keys is an Enterprise feature. I am not going to pay for something like this in order to provide my family access to all my models. I can do without cost control (although it would be nice), but I need users to be able to generate a key and use this key to access all the models I provide.

NeutralCrane | 20 hours ago

I don’t believe it is an enterprise feature. I did some testing on Bifrost just last month on a free open source instance and was able to set up virtual keys.

beanaroo | a day ago

We have tried reaching out to their sales multiple times but never get a response.

treefarmer | a day ago

If you're talking about their proxy offering, I had this exact same issue and switched to Portkey. I just use their free plan and don't care about the logs (I log separately on my own). It's way faster (probably because their code isn't garbage like the LiteLLM code; they had a 5K+ line Python file with all their important code in it the last time I checked).

howardjohn | 16 hours ago

agentgateway.dev is one I have been working on that is worth a look if you are using the proxy side of LiteLLM. It's open source and part of the Linux Foundation.

postalcoder | a day ago

This is a brutal one. A ton of people use litellm as their gateway.

Imustaskforhelp | a day ago

Do you think people will update litellm without seeing this discussion, or have updates happen automatically, which would then lead to loss of crypto wallets and especially AI API keys?

Now I am not worried about the AI API keys doing much damage, but I am thinking one step further, and I am not sure how many of these corporations follow their privacy policy, so perhaps someone more experienced can tell me: wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information about businesses, but also private individuals?

daveguy | a day ago

Maybe then people will start to realize crypto isn't even worth the stored bits.

Irrevocable transfers... What could go wrong?

eoskx | a day ago

Not just as a gateway in a lot of cases; CrewAI and DSPy use it directly. DSPy uses it as its only way to call upstream LLM providers, and CrewAI falls back to it if the OpenAI, Anthropic, etc. SDKs aren't available.

mikert89 | a day ago

Wow this is in a lot of software

eoskx | a day ago

Yep, DSPy and CrewAI have direct dependencies on it. DSPy uses it as its primary library for calling upstream LLM providers and CrewAI falls back to it I believe if the OpenAI, Anthropic, etc. SDKs aren't available.

Imustaskforhelp | a day ago

Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks at the same time that threat actors are getting more advanced, partially with the help of LLMs.

First Trivy (which got compromised twice), now LiteLLM.

6thbit | a day ago

The title is a bit misleading.

The package was directly compromised, not “by supply chain attack”.

If you use the compromised package, your supply chain is compromised.

dlor | a day ago

It's both. They got compromised by another supply chain attack on Trivy initially.

intothemild | a day ago

I just installed Harbor, and it instantly pegged my CPU. I was lucky to see my processes before the system hard locked.

Basically it forkbombed `grep -r rpcuser\rpcpassword` processes trying to find crypto wallets or something. I saw that they spawned from harness, and killed it.

Got lucky, no backdoor installed here from what I could make out of the binary.

hmokiguess | a day ago

What is Harness?

intothemild | a day ago

Sorry i mean Harbor.. was running terminal bench

abhikul0 | a day ago

Same experience with browser-use; it installs litellm as a dependency. Rebooted my Mac as nothing was responding; luckily only GitHub and Hugging Face tokens were saved in .git-credentials, and I have invalidated them. This was inside a conda env; should I reinstall my OS against any potential backdoors?

abhikul0 | 13 hours ago

Well, I reinstalled and finally upgraded to Tahoe.

swyx | a day ago

> i was lucky to see my processes before the system hard locked.

how do you do that? have Activity Monitor up at all times?

krackers | a day ago

Probably iStat menus or something similar

chillfox | a day ago

Now I feel lucky that I switched to just using OpenRouter a year ago, because LiteLLM was incredibly flaky and kept causing outages.

gkfasdfasdf | a day ago

Someone needs to go to prison for this.

6thbit | a day ago

A safeguard worth exploring for some: the automatic execution can be suppressed using the Python interpreter's -S option.

This also disables the site import entirely, so it's not generically viable for everyone without testing.
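A quick sanity check of the difference (a sketch that just asks a fresh interpreter whether the `site` module, which processes .pth files at startup, was loaded):

```python
import subprocess
import sys

def site_loaded(extra_args: list[str]) -> bool:
    """Start a fresh interpreter and report whether `site` was imported."""
    out = subprocess.run(
        [sys.executable, *extra_args, "-c",
         "import sys; print('site' in sys.modules)"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() == "True"

print(site_loaded([]))      # normal startup: site (and .pth files) processed
print(site_loaded(["-S"]))  # -S: site import skipped, .pth files not run
```

As noted above, running with -S also skips legitimate site-packages path setup, so most applications will break without further adjustment (e.g., manually extending sys.path).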

cpburns2009 | a day ago

The 1.82.7 exploit was executed on import. The 1.82.8 exploit used a .pth file which is run at startup (module discovery, basically).

zahlman | 20 hours ago

It's not really "automatic import", as described. The exploit is directly contained in the .pth file; Python allows arbitrary code to run from there, with some restrictions that are meant to enforce a bit of sanity for well-meaning users and which don't meaningfully mitigate the security risk.

As described in https://docs.python.org/3/library/site.html :

> Lines starting with import (followed by space or tab) are executed.... The primary intended purpose of executable lines is to make the corresponding module(s) importable (load 3rd-party import hooks, adjust PATH etc).

So what malware can do is put something in a .pth file like

  import sys;exec("evil stringified payload")
and all restrictions are trivially bypassed. It used to not even require whitespace after `import`, so you could even instead do something like

  import_=exec("evil stringified payload")
In the described attack, the imports are actually used; the standard library `subprocess` is leveraged to exec the payload in a separate Python process. Which, since it uses the same Python environment, is also a fork bomb (well, not in the traditional sense; it doesn't grow exponentially, but will still cause a problem).

.pth files have worked this way since 2.1 (comparing https://docs.python.org/2.1/lib/module-site.html to https://docs.python.org/2.0/lib/module-site.html). As far as I can tell there was no PEP for that change.
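The mechanism is easy to demonstrate safely, since `site.addsitedir()` processes .pth files the same way the interpreter's startup machinery does for site-packages (a benign sketch; the flag name is made up for illustration):

```python
import site
import sys
import tempfile
from pathlib import Path

# Write a .pth file whose "import" line runs arbitrary code -- exactly the
# trick the malware uses. Here it just sets a harmless flag.
tmp = Path(tempfile.mkdtemp())
(tmp / "demo.pth").write_text("import sys; sys.pth_demo_ran = True\n")

# addsitedir() executes lines starting with "import" in .pth files, just
# like the interpreter does for site-packages at startup.
site.addsitedir(str(tmp))

print(getattr(sys, "pth_demo_ran", False))  # True: the line was executed
```

No `import litellm` (or any import at all) is needed; merely having the .pth file in a site directory when the interpreter starts is enough.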

ramimac | a day ago

This is tied to the TeamPCP activity over the last few weeks. I've been responding, and keeping an up to date timeline. I hope it might help folks catch up and contextualize this incident:

https://ramimac.me/trivy-teampcp/#phase-09

miraculixx | a day ago

This is interesting. How do you keep this up to date so quickly?

ramimac | a day ago

Blood, sweat, and tears.

The investment compounds! I have enough context to quickly vet incoming information, then it's trivial to update a static site with a new blurb

itintheory | a day ago

Thanks for putting this together. I've been seeing the name TeamPCP pop up all over, but hadn't seen everything in one place.

ctmnt | 18 hours ago

This is fantastic, thank you. Your reporting has been great. But also, damn, the playlist.

0fflineuser | a day ago

I was running it (as a proxy) in my homelab with docker compose using the litellm/litellm:latest image https://hub.docker.com/layers/litellm/litellm/latest/images/... . I don't think this was compromised, as it is from 6 months ago, and I checked that it is version 1.77.

I guess I am lucky as I have watchtower automatically update all my containers to the latest image every morning if there are new versions.

I also just added it to my homelab this sunday, I guess that's good timing haha.

oncelearner | a day ago

That's a bad supply-chain attack, many folks use litellm as main gateway

rdevilla | a day ago

laughs smugly in vimscript

fratellobigio | a day ago

It's been quarantined on PyPI

cpburns2009 | a day ago

LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery token was worth it.

[1]: https://pypi.org/project/litellm/

rdevilla | a day ago

It will only take one agent-led compromise to get some Claude-authored underhanded C into llvm or linux or something and then we will all finally need to reflect on trusting trust at last and forevermore.

MuteXR | a day ago

You know that people can already write backdoored code, right?

ipython | a day ago

But now you have compromise _at scale_. Before, poor plebs like us had to artisanally craft every back door. Now we have a technology to automate that mundane exploitation process! Win!

MuteXR | a day ago

You still have a human who actually ends up reviewing the code, though. Now if the review was AI powered... (glances at openclaw)

dec0dedab0de | a day ago

Yeah, and they can write code with vulnerabilities by accident. But this is a new class of problem, where a known trusted contributor can accidentally allow a vulnerability that was added on purpose by the tooling.

Imustaskforhelp | a day ago

If that happened, my worry would be all the sensitive government servers around the world that might then be exploited, and the amount of damage that could be caused silently by such a threat actor, or something like AWS/GCP/these massive hyperscalers, which are also used by governments around the globe at times.

The possibilities within a good exploit could be catastrophic, and if we assume nation-states are interested in sponsoring hacking attacks (which many nations already do) to attack enemy nations or gain leverage, we are looking at damage in the trillions at that point.

But I would assume that Linux might be safe for now; it might be the most looked-at code, and it's definitely something safe.

LLVM might be a bit more interesting, as it might go a little more unnoticed, but hopefully the people working on LLVM are well funded enough to take a look at everything carefully and not have such a slip-up.

cozzyd | a day ago

The only way to be safe is to constantly change internal APIs so that LLMs are useless at kernel code

thr0w4w4y1337 | a day ago

To slightly rephrase a citation from Demobbed (2000) [1]:

The kernel is not just open source, it's a very fast-moving codebase. That's how we win all wars against AI-authored exploits. While the LLM trains on our internal APIs, we change the APIs — by hand. When the agent finally submits its pull request, it gets lost in unfamiliar header files and falls into a state of complete non-compilability. That is the point. That is our strategy.

1 - https://en.wikipedia.org/wiki/Demobbed_(2000_film)

vlovich123 | a day ago

Reflect in what way? The primary focus of that talk is that it’s possible to infect the binary of a compiler in a way that source analysis won’t reveal and the binary self replicates the vulnerability into other binaries it generates. Thankfully that particular problem was “solved” a while back [1] even if not yet implemented widely.

However, the broader idea of supply chain attacks remains challenging and AI doesn’t really matter in terms of how you should treat it. For example, the xz-utils back door in the build system to attack OpenSSH on many popular distros that patched it to depend on systemd predates AI and that’s just the attack we know about because it was caught. Maybe AI helps with scale of such attacks but I haven’t heard anyone propose any kind of solution that would actually improve reliability and robustness of everything.

[1] Fully Countering Trusting Trust through Diverse Double-Compiling https://arxiv.org/abs/1004.5534

cozzyd | a day ago

I believe the issue is if an exploit is somehow injected into AI training data such that the AI unwittingly produces it and the human who requested the code doesn't even know.

vlovich123 | a day ago

That’s a separate issue and specifically not what OP was describing. Also highly unlikely in practice unless you use a random LLM - the major LLM providers already have to deal with such things and they have decent techniques to deal with this problem afaik.
If you think the major LLM providers have a way to filter malicious (or even bad or wrong) code from training data, I have a bridge to sell you.

vlovich123 | 13 hours ago

No, not to filter it out of training, but to make it difficult for poisoned data to have an outsized impact relative to its frequency. So yes, you can poison some random thing only you have ever mentioned. It's more difficult to poison the LLM into injecting subtle vulnerabilities you influence into code users ask it to generate. Unless you somehow flood the training data with such vulnerabilities, but that's also more detectable.

TLDR: what I said is only foolish if you take the absolute dumbest possible interpretation of it.

BoppreH | 6 hours ago

The proposed solution seems to rely on a trusted compiler that generates the exact same output, bit-for-bit, as the compiler-under-test would generate if it was not compromised. That seems useful only in very narrow cases.

vlovich123 | 5 hours ago

You have a trusted compiler you write in assembly or even machine code. You then compile a source code you trust using that compiler. That is then used for the bit for bit analysis against a different binary of the compiler you produced to catch the hidden vulnerability.

BoppreH | 4 hours ago

It's assumed that in this scenario you don't have access to a trusted compiler; if you do, then there's no problem.

And the thesis linked above seems to go beyond simply "use a trusted compiler to compile the next compiler". It involves deterministic compilation and comparing outputs, for example.

ting0 | a day ago

Stop scaring me.

You're right though. There's been talk of a big global hack attack for a while now.

Nothing is safe anymore. Keeping everything private and airgapped is the only way forward. But most of our private and personal data is in the cloud, and we have no control over it or the backups that these companies keep.

While LLMs unlock the opportunity to self-host and self-create your infrastructure, it also unleashes the world of pain that is coming our way.

downboots | 15 hours ago

How loud would it be?

TacticalCoder | 19 hours ago

The guys deterministically bootstrapping a simple compiler from a few hundred bytes, which then deterministically compiles a more powerful compiler, and so on, are on to something.

In the end we need fully deterministic, 100% verifiable chains: from the tiny bootstrapped beginning to the final thing.

There are people working on these things, both, in a way, "top-down" (bootstrapping a tiny compiler from a few hundred bytes) and "bottom-up" (a distro like Debian having 93% of all its packages fully reproducible).

While most people are happy saying "there's nothing wrong with piping curl to bash", there are others that do understand what trusting trust is.

As a sidenote, although not a kernel backdoor, Jia Tan's XZ backdoor, in that Rube Goldberg systemd "we modify your SSHD because we're systemd, and so now SSHD's attack surface is immensely bigger" setup, was a wake-up call.

And, sadly and scarily, that's only for one we know about.

I think we'll see many more of these cascading supply chain attacks. I also think that, in the end, more people are going to realize that there are better ways to design, build, and ship software.

PunchyHamster | 10 hours ago

"We"? "We" know. We just can't do much about people on LLM crack who will go around any and every quality step just to tell themselves the LLM made them x times more productive.

nickvec | a day ago

Looks like all of the LiteLLM CEO’s public repos have been updated with the description “teampcp owns BerriAI” https://github.com/krrishdholakia

otabdeveloper4 | a day ago

LiteLLM is the second worst software project known to man. (First is LangChain. Third is OpenClaw.)

I'm sensing a pattern here, hmm.

nickvec | a day ago

Not familiar with LangChain besides at a surface level - what makes it the worst software project known to man?

eoskx | a day ago

LangChain at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise. DSPy uses LiteLLM as its primary way to call OpenAI, etc. and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.

otabdeveloper4 | 23 hours ago

You have to see it to believe it. Feel the vibes.

ting0 | 23 hours ago

LLMs recommend LiteLLM, so its popularity will only continue.

hahaddmmm12x | a day ago

[flagged]

dang | a day ago

Automated comments aren't allowed here. Please stop.

https://news.ycombinator.com/newsguidelines.html#generated

shay_ker | a day ago

A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.

Do the labs label code versions with an associated CVE to label them as compromised (telling the model what NOT to do)? Do they do adversarial RL environments to teach what's good/bad? I'm very curious since it's inevitable some pwned code ends up as training data no matter what.

Imustaskforhelp | a day ago

I am pretty sure that such measures aren't taken by AI companies, though I may be wrong.

alansaber | a day ago

The API/online model inference definitely runs through some kind of edge safeguarding models which could do this.

tomaskafka | a day ago

Everyone’s (well, except Anthropic, they seem to have preserved a bit of taste) approach is the more data the better, so the databases of stolen content (erm, models) are memorizing crap.

datadrivenangel | a day ago

This was a compromise of the library owners' GitHub accounts, apparently, so this isn't a scenario related to dangerous code in the training data.

I assume most labs don't do anything to deal with this, and just hope that it gets trained out because better code should be better rewarded in theory?

Havoc | a day ago

By betting that it dilutes away and not worrying about it too much. Bit like dropping radioactive barrels into the deep ocean.

ting0 | 23 hours ago

Yeah, and that won't hold up for long. Just wait until some well resourced attacker replicates their exploit into tens of thousands of sources it knows will be scraped and included in the training set to bias the model to produce their vulnerable code. Only a matter of time.

kstenerud | a day ago

We need real sandboxing. Out-of-process sandboxing, not in-process. The attacks are only going to get worse.

That's why I'm building https://github.com/kstenerud/yoloai

xinayder | a day ago

When something like this happens, do security researchers instantly contact the hosting companies to suspend or block the domains used by the attackers?

redrove | a day ago

First line of defense is that the git host and artifact host scrape the malware clean (in this case GitHub and PyPI).

Domains might get added to a list for things like 1.1.1.2, but as you can imagine that has much smaller coverage; not everyone uses something like that in their DNS infra.

itintheory | 23 hours ago

This threat actor is also using Internet Computer Protocol (ICP) "Canisters" to deliver payloads. I'm not too familiar with the project, but I'm not sure blocking domains in DNS would help there.

dec0dedab0de | a day ago

github, pypi, npm, homebrew, cpan, etc etc. should adopt a multi-multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X amount of monthly downloads.

Basically, have all releases require multi-factor auth from more than one person before they go live.

A single person being compromised, either technically or by being hit on the head with a wrench, should not be able to release something malicious that affects so many people.

worksonmine | a day ago

And how would that work for single maintainer projects?

dec0dedab0de | a day ago

They would have to find someone else if they grew too big.

Though, the secondary doesn't necessarily have to be a maintainer or even a contributor on the project. It just needs to be someone else to do a sanity check, to make sure it is an actual release.

Heck, I would even say that as the project grows in popularity, the amount of people required to approve a release should go up.

worksonmine | a day ago

So if I'm developing something I want to use and the community finds it useful but I take no contributions and no feature requests I should have to find another person to deal with?

How do I even know who to trust, and what prevents two people from conspiring together with a long con? Sounds great on the surface but I'm not sure you've thought it through.

dec0dedab0de | a day ago

It wouldn't prevent a project whose goal is to be purposely malicious, just releases that the maintainers didn't actually sanction.

As far as who to trust, I could imagine the maintainers of different high-level projects helping each other out in this way.

Though, if you really must allow a single user to publish releases to the masses using existing shared social infrastructure, then you could mitigate this type of attack by adding a time delay, with the ability for users to flag. So instead of immediately going live, add in a release date, maybe even force them to announce the release date on an external system as well. The downside with that approach is that it would limit the ability to push out fixes quickly.

But I think I am OK with saying if you're a solo developer, you need to bring someone else on board or host your builds yourself.

worksonmine | a day ago

Or just don't install every package on the earth. The only supply-chain attack I've been affected by is xz, and I don't think anyone was safe from that one. Your solution wouldn't have caught it.

Better to enforce good security standards than cripple the ecosystem.

vikarti | 13 hours ago

Why not make it _optional_ but implement it on GitHub etc., so any publisher could enable it, no matter how small? But also make it possible to disable, either by a support request and a short wait, by secondary confirmation, or via a LONG (months-long) wait.

cpburns2009 | 5 hours ago

I really hoped PyPI's required switch to 2-factor auth would require reauthorization to publish packages. But no, they went with "trusted publishing" (i.e., publishing is triggered by CI, which will happily publish a compromised repo). Trusted publishing would only have been a minor hindrance to the litellm exploit: since they acquired an account's personal access token, the exploit could have been committed to the repo and the package published.

0123456789ABCDE | a day ago

airflow, dagster, dspy, unsloth.ai, polar

eoskx | a day ago

This is bad, especially from a downstream dependency perspective. DSPy and CrewAI also import LiteLLM, so you could not be using LiteLLM as a gateway, but still importing it via those libraries for agents, etc.

nickvec | a day ago

Wow, the postmortem for this is going to be brutal. I wonder just how many people/orgs have been affected.

eoskx | a day ago

Yep, I think the worst impact is going to be from libraries that were using LiteLLM as just an upstream LLM provider library vs for a model gateway. Hopefully, CrewAI and DSPy can get on top of it soon.

benatkin | a day ago

I'm surprised to see nanobot uses LiteLLM: https://github.com/HKUDS/nanobot

LiteLLM wouldn't be my top choice, because it installs a lot of extra stuff. https://news.ycombinator.com/item?id=43646438 But it's quite popular.

flux3125 | a day ago

I completely removed nanobot after I found that. Luckily, I only used it a few times and inside a docker container. litellm 1.82.6 was the latest version I could find installed, not sure if it was affected.

xunairah | a day ago

Version 1.82.7 is also compromised. It doesn't have the pth file, but the payload is still in proxy/proxy_server.py.
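For anyone auditing a machine, .pth files are worth a look: site.py executes any line starting with `import` in a *.pth file on every interpreter startup, which is exactly the kind of persistence hook a malicious package can use. A rough stdlib sketch (note that legitimate packages such as setuptools also ship import-line .pth files, so any hits need manual review):

```python
import site
import sysconfig
from pathlib import Path

def pth_files_with_imports():
    """List .pth files containing 'import' lines.

    Lines starting with 'import' in a *.pth file are executed by
    site.py every time Python starts. Legitimate packages use this
    too, so this is a starting point for review, not a verdict.
    """
    dirs = {sysconfig.get_paths()["purelib"]}
    dirs.update(getattr(site, "getsitepackages", lambda: [])())
    dirs.add(site.getusersitepackages())
    hits = []
    for d in dirs:
        p = Path(d)
        if not p.is_dir():
            continue
        for pth in sorted(p.glob("*.pth")):
            text = pth.read_text(errors="replace")
            if any(ln.lstrip().startswith("import") for ln in text.splitlines()):
                hits.append(pth)
    return hits

if __name__ == "__main__":
    for f in pth_files_with_imports():
        print(f)
```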

tom_alexander | a day ago

Only tangentially related: Is there some joke/meme I'm not aware of? The github comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."

Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.

nickvec | a day ago

Ton of compromised accounts spamming the GH thread to prevent any substantive conversation from being had.

tom_alexander | a day ago

Oh wow. That's a lot of compromised accounts. Guess I was wrong about it not being an attack.

incognito124 | a day ago

In the thread:

> It also seems that attacker is trying to stifle the discussion by spamming this with hundreds of comments. I recommend talking on hackernews if that might be the case.

jbkkd | a day ago

Those are all bots commenting, and now exposing themselves as such.

Imustaskforhelp | a day ago

Bots to flood the discussion to prevent any actual conversation.

vultour | a day ago

These have been popping up on all the TeamPCP compromises lately

jFriedensreich | a day ago

We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy and too little isolation. We need to start working in full sandboxes with defence in depth, real guardrails, and UIs: VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor, and more, but with much better usability. It's the same set of requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not as an isolated security incident.

kalib_tweli | a day ago

Would value your opinion on my project to isolate creds from the container:

https://github.com/calebfaruki/tightbeam https://github.com/calebfaruki/airlock

This is literally the thing I'm trying to protect against.

jFriedensreich | a day ago

I would split the agent loop entirely from the main project of tightbeam; no one wants yet another agent harness, we need to focus on the operational problems. Airlock seems interesting in theory, but it's really hard to believe this could capture every single behaviour of the native local binaries. We need the native tools with native behaviour, otherwise we might as well use something like MCP. I would bet more on a git protocol proxy and native solutions for each of these.

binsquare | a day ago

So... I'm working on an open source technology to make a literal virtual machine shippable, i.e. freezing everything inside it, isolated by the VM/hypervisor for sandboxing, with support for containers too since it's a real Linux VM.

The problems you mentioned resonated a lot with me and why I'm building it, any interest in working to solve that together?: https://github.com/smol-machines/smolvm

vladvasiliu | a day ago

What would the advantage of this be compared to using something like a Firecracker backend for containerd?

jFriedensreich | a day ago

firecracker does not run on macos and has no GPU support

binsquare | a day ago

Runs locally on Macs, much easier to install/use, and designed to be "portable", meaning you can package a VM to preserve statefulness and run it somewhere else.

I worked at AWS, specifically with Firecracker in the container space, for 4 years. We had a very long onboarding doc for developing on Firecracker for containers... so I made sure to focus on ease of use here.

Bengalilol | a day ago

Probably on the side of your project, but did you try SmolBSD? <https://smolbsd.org> It's a meta-OS for microVMs that boots in 10–15 ms.

It can be dedicated to a single service (or a full OS), runs a real BSD kernel, and provides strong isolation.

Overall, it fits into the "VM is the new container" vision.

Disclaimer: I'm following iMil (the developer of smolBSD and a contributor to NetBSD) through his Twitch streams, and I truly love what he is doing. I haven't actually used smolBSD in production myself since I don't have a need for it (but I participated in his live streams by installing and running his previews), and my answer might be somewhat off-topic.

More here <https://hn.algolia.com/?q=smolbsd>

binsquare | a day ago

First time hearing about it, thanks for sharing!

At a glance, it's a matter of compatibility, most software has first class support for linux. But very interesting work and I'm going to follow it closely

jFriedensreich | a day ago

Thanks for the pointer! Love the premise project. Just a few notes:

- a security-focused project should NOT train people to install by piping to bash. If I try previewing the install script in the browser, it forces a download instead of showing as plain text. The first thing I see is an argument

# --prefix DIR Install to DIR (default: ~/.smolvm)

that later in the script gets a lib folder under it rm -rf'd. So if I accidentally pick a folder with ANY lib folder, it will be deleted.

- I'm not sure what the comparison to colima with krunkit machines is, except that you don't use VM images; how this works or how it is better is not 100% clear

- Just a minor thing, but people don't have much attention, and I just saw AWS and fly.io in the description and nearly closed the project. It needs to be simpler to see that this is a local sandbox with libkrun, NOT a wrapper for a remote sandbox like so many of the projects out there.

Will try reaching you on some channel; would love to collaborate, especially on devX. I would be very interested in something more reliable and a bit more lightweight in place of colima when libkrun can fully replace vz.

binsquare | a day ago

Love this feedback, agree with you completely on all of it - I'll be making those changes.

1. In comparison with colima with krunkit, I ship smolvm with custom built kernel + rootfs, with a focus on the virtual machine as opposed to running containers (though I enable running containers inside it).

The customizations are also opensource here: https://github.com/smol-machines/libkrunfw

2. Good call on that description!

I've reached out to you on LinkedIn

dist-epoch | a day ago

What is the alternative to bash piping? If you don't trust the project install script, why would you trust the project itself? You can put malware in either.

wang_li | a day ago

It turns out that it's possible for the server to detect whether the script is being piped to "| bash" or just being downloaded. Downloading, inspecting, and then running that specific download is safer than piping directly to bash; inspecting a download doesn't help if you then re-download and pipe it to a shell, since the server can serve different content the second time.

dist-epoch | a day ago

The server can also put malware in the .tar.gz. Are you really checking all the files in there, even the binaries? If you don't what's the point of checking only the install script?

TacticalCoder | 19 hours ago

> If you don't what's the point of checking only the install script?

The .tar.gz can be checksummed and saved (to be sure later on that you install the same .tar.gz and to be sure it's still got the same checksum). Piping to Bash in one go not so much. Once you intercept the .tar.gz, you can both reproduce the exploit if there's any (it's too late for the exploit to hide: you've got the .tar.gz and you may have saved it already to an append-only system, for example) and you can verify the checksum of the .tar.gz with other people.

The point of doing all these verifications is not only to not get an exploit: it's also to be able to reproduce an exploit if there's one.

There's a reason, say, packages in Debian are nearly all both reproducible and signed.

And there's a reason they're not shipped with piping to bash.

Other projects offer an install script that downloads a file but verifies its checksum. That's the case of the Clojure installer, for example: it verifies the .jar. Now I know what you're going to say: "but the .jar could be backdoored if the site got hacked, for both the checksum in the script and the .jar could have been modified". Yes. But it's also signed with GPG. And I do religiously verify that the "file inside the script" has a valid signature when it has one. And if suddenly the signing key changed, that rings alarm bells.

Why settle for the lowest common denominator security-wise? Because Anthropic (I pay my subscription btw) sets a very bad example and relies entirely on the security of its website and pipes to Bash? This is high-level suckage. A company should know better: sign the files you ship and don't encourage lame practices.

Once again: all these projects that suck security-wise are systematically built on the shoulders of giants (like Debian) who know what they're doing and who are taking security seriously.

This "malware exists so piping to bash is cromulent" mindset really needs to die. That mentality is the reason we get major security exploits daily.
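Sketching the checksum step in stdlib Python (the expected digest has to come from your own append-only record, not from the same server that serves the artifact):

```python
import hashlib

def verify_sha256(path: str, expected_hex: str, chunk_size: int = 1 << 20) -> bool:
    """Compare a file's SHA-256 digest against a previously recorded value.

    Run this against the saved .tar.gz before every install; a mismatch
    means the artifact changed since you recorded the checksum.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

Because the tarball itself is saved, a mismatch is also reproducible evidence, which is the point made above.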

dist-epoch | 7 hours ago

> And I do religiously verify that the "file inside the script" does have a valid signature when it has one.

If you want to go down this route, there is no need to reinvent the wheel. You can add custom repositories to apt/..., you only need to do this once and verify the repo key, and then you get this automatic verification and installation infrastructure. Of course, not every project has one.

pabs3 | 15 hours ago

> Are you really checking all the files in there, even the binaries?

One should never trust the binaries, always build them from source, all the way down to the bootloader.

https://bootstrappable.org/

Checking all the files is really the only way to deal with potential malware, or even security vulns.

https://github.com/crev-dev/

dist-epoch | 7 hours ago

Nice ideal, but Chrome/Firefox would take days to build on your average laptop (if it doesn't run out of memory first).

pabs3 | 4 hours ago

The latest Firefox build that Debian did took just over one hour on amd64/armhf and 1.5 hours on ppc64el. The slowest Debian architecture is riscv64, and the last successful build there took only 17.5h, so definitely not days. Your average modern developer-class laptop is going to take a lot less than riscv64, too.

jFriedensreich | a day ago

That assumes you even need an install script. 90% of install scripts just check the platform, make the binary executable, and put it in the right place. Just give me links to a GitHub release page with immutable releases enabled and pure binaries. I download the binary, put it in a temporary folder, and run it with a seatbelt profile that logs what it does. Binaries should "just run" and at most access one folder, in a place they show you, and that should be configurable! Fuck installers.

fsflover | a day ago

It looks like you may be interested in Qubes OS, https://qubes-os.org.

amelius | a day ago

We need programming languages where every imported module is in its own sandbox by default.

jFriedensreich | a day ago

We have one where that's possible: workerd (Apache 2.0). No new language needed, just a new runtime.

amelius | a day ago

I mean, the sandboxing aspect of a language is just one thing.

We should have sandboxing in Rust, Python, and every language in between.

jerf | a day ago

Now is probably a pretty good time to start a capabilities-based language if someone is able to do that. I wish I had the time.

saidnooneever | a day ago

Just sandbox the interpreter (in this case), the package manager, and the binaries.

You can run it in a chroot jail and it wouldn't have accessed SSH keys outside of the jail...

There are many more similar technologies, already existing for decades.

Doing it on a per-language basis is not ideal; any new language would have to reinvent the wheel.

Better to do it at the system level, with the already existing tooling.

OpenBSD has pledge/unveil; Linux has chroot, namespaces, cgroups; FreeBSD has Capsicum; and so on. There are many of these things.

(I am not sure how well they play within these scenarios, I'm just triggering on the sandboxing comment. There are plenty of ways to do it as far as I can tell...)

amelius | a day ago

What if I wanted to write a program that uses untrusted libraries, but also does some very security sensitive stuff? You are probably going to suggest splitting the program into microservices. But that has a lot of problems and makes things slow.

The problem is that programs can be entire systems, so "doing it at the system level" still means that you'd have to build boundaries inside a program.

saidnooneever | a day ago

You can do multi-process things, or drop privs when using untrusted things.

You can use OS APIs to isolate the thing you want to use just fine.

And yes, if you mix privilege levels in a program by design, then you will have to design your program for that.

This is simple logic.

A programming language cannot decide for you who and what you trust.

amelius | a day ago

> You can use OS APIs to isolate the thing you want to use just fine.

For the sake of the argument, what if I wanted to isolate numpy from scipy?

Would you run numpy in a separate process from scipy? How would you share data between them?

Yes, you __can__ do all of that without programming language support. However, language support can make it much easier.
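To make the trade-off concrete, here's a rough sketch of pushing an "untrusted" computation into a separate interpreter with a scrubbed environment. Every call pays process startup plus serialization, and real confinement of filesystem/network access would still need OS sandboxing on top; the worker here is purely illustrative:

```python
import json
import os
import subprocess
import sys

# Hypothetical "untrusted" computation, standing in for a third-party
# library call. It runs in a fresh interpreter; data crosses the
# process boundary as JSON on stdin/stdout.
WORKER = """
import json, sys
values = json.load(sys.stdin)
print(json.dumps(sum(v * v for v in values)))
"""

def call_isolated(values):
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=json.dumps(values),
        capture_output=True,
        text=True,
        env={"PATH": os.defpath},  # child sees no inherited env secrets
        check=True,
    )
    return json.loads(proc.stdout)

print(call_isolated([1, 2, 3]))  # sum of squares, computed in the child -> 14
```

Language-level sandboxing would let the two sides share memory directly; without it, this serialization tax is the price of the boundary.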

mike_hearn | a day ago

Java had that from v1.2 in the 1990s. It got pulled out because nobody used it. The problem of how to make this usable by developers is very hard, although maybe LLMs change the equation.

staticassertion | a day ago

In frontend-land you can sort of do this by loading dependencies in iframe sandboxes. In backend, ur fucked.

codethief | 10 hours ago

Or just make side effects explicit in the type system through monads or algebraic effects.

wswin | a day ago

Containers greatly reduce this kind of info-stealing; only explicitly provided creds would be leaked.

jFriedensreich | a day ago

Containers can mean many things. If you mean plain, default-configured Docker containers, then no: they are a packaging mechanism, not a safe environment by themselves.

wswin | a day ago

They don't have access to the host filesystem nor environment variables and this attack wouldn't work.

jFriedensreich | a day ago

Just because this particular attack did not contain container-escape exploits does not mean this is safe. It's better than nothing, but not something that will save us.

lyu07282 | 12 hours ago

These supply chain attacks we're seeing are bad, but if someone burned a 0-day container escape on one, it would probably be a net positive for security overall. Just saying, this is FUD.

cedws | a day ago

This is the security shortcuts of the past 50 years coming back to bite us. Software has historically been a world where we all just trust each other. I think that’s coming to an end very soon. We need sandboxing for sure, but it’s much bigger than that. Entire security models need to be rethought.

1313ed01 | a day ago

This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in. If the sandbox, or the operating system the sandbox runs in, will get breaking changes and force everyone to always be on a recent release (or worse, track main branch) then that will still be a huge supply chain risk in itself.

dotancohen | a day ago

  > This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

For the most part you can. Just version-pin slightly-stale versions of dependencies, after ensuring there are no known exploits for that version. Avoid the latest updates whenever possible. And keep aware of security updates and affected versions.

Don't just update every time the dependency project updates. Update specifically for security issues, new features, and specific performance benefits. And even then avoid the latest version when possible.
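A runtime sketch of that pinning discipline, with purely illustrative pins (real ones would come from your lock file):

```python
from importlib.metadata import PackageNotFoundError, version

# Illustrative pins only -- not a recommendation of specific versions.
PINNED = {"litellm": "1.82.6"}

def check_pins(pins=PINNED):
    """Return {package: installed_version} for every pin that doesn't match.

    An uninstalled package maps to None. Calling this at startup makes
    the app fail loudly if an unexpected (possibly compromised) version
    slipped into the environment.
    """
    mismatches = {}
    for pkg, want in pins.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            have = None
        if have != want:
            mismatches[pkg] = have
    return mismatches
```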

1313ed01 | a day ago

Sure, and that is basically what sane people do now, but that only works until something needs a security patch that was not provided for the old version, and changing one dependency is likely to cascade so now I am open to supply chain attacks in many dependencies again (even if briefly).

To really run code without trust would need something more like a microkernel that is the only thing in my system I have to trust, and everything running on top of that is forced to behave and isolated from everything else. Ideally a kernel so small and popular and rarely modified that it can be well tested and trusted.

dist-epoch | a day ago

Virtual machines are that - tiny surfaces to access the host system (block disk device, ...). Which is why virtual machine escape vulnerabilities are quite rare.

bilbo0s | 23 hours ago

I feel like in some cases we should be using virtual machines. Especially in domains where risk is non-trivial.

How do you change developer and user habits though? It's not as easy as people think.

aftbit | a day ago

The secure boot "shim" is a project like this. Perhaps we need more core projects that can be simple and small enough to reach a "finished" state where they are unlikely to need future upgrades for any reason. Formal verification could help with this ... maybe.

https://wiki.debian.org/SecureBoot#Shim

wang_li | a day ago

>Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in.

This is the problem with software progressivism. Some things really should just be what they are, you fix bugs and security issues and you don't constantly add features. Instead everyone is trying to make everything have every feature. Constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.

ashishb | 16 hours ago

> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Not really. You should limit the attack surface for third-party code.

A linter running in `dir1` should not access anything outside `dir1`.

pabs3 | 15 hours ago

I think Bootstrappable Builds from source without any binaries, plus distributed code audits would do a better job than locking down already existing binaries.

https://bootstrappable.org/ https://github.com/crev-dev/

georgestrakhov | a day ago

I've been thinking the same thing. And it's somewhat parallel to what happened to meditation vs. drugs. In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem. Mostly self-harm problem in that case. But the dynamic is somewhat similar to what's happening now with LLMs and coding.

Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.

dec0dedab0de | a day ago

> In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem.

I would think humans have been using psychedelics since before we figured out meditation. Likely even before we were humans.

greazy | 11 hours ago

Ah yes the stoned ape hypothesis. I don't know if there is or will ever be evidence to support the hypothesis.

I also like the drunk monkey hypothesis.

KoftaBob | a day ago

What in the world are “the dangerous insights”?

garthk | 21 hours ago

“Society is a construct”, for starters?

otabdeveloper4 | 5 hours ago

That's babby's first insight. Most people figure this out on their own in kindergarten.

AlexCoventry | a day ago

Supply-chain attacks long pre-date effective AI agentic coding, FWIW.

klibertp | a day ago

The NIH syndrome becoming best practice (a commenter below already says they "vibe-coded replacements for many dependencies") would also save quite a few jobs, I suspect. Fun times.

ting0 | a day ago

I've been doing that too. The downside is it's a lot of work for big replacements.

ting0 | a day ago

What we need is accountability and ties to real-world identity.

If you're compromised, you're burned forever in the ledger. It's the only way a trust model can work.

The threat of being forever tainted is enough to make people more cautious, and attackers will have no way to pull off attacks unless they steal identities of powerful nodes.

Like, it shouldn't be a thing that some large open-source project has some 4th layer nested dependency made by some anonymous developer with 10 stars on Github.

If instead, the dependency chain had to be tied to real verified actors, you know there's something at stake for them to be malicious. It makes attacks much less likely. There's repercussions, reputation damage, etc.

MetaWhirledPeas | a day ago

> real-world identity

This bit sounds like dystopian governance, antithetical to most open source philosophies.

2OEH8eoCRo0 | 23 hours ago

Would you drive on bridges or ride in elevators "inspected" by anons? Why are our standards for digital infrastructure and software "engineering" so low?

I don't blame the anons but the people blindly pulling in anon dependencies. The anons don't owe us anything.

MetaWhirledPeas | 23 hours ago

This option is available already in the form of closed-source proprietary software.

If someone wants a package manager where all projects mandate verifiable ID that's fine, but I don't see that getting many contributors. And I also don't see that stopping people using fraudulent IDs.

mastermage | 20 hours ago

Do you know who inspected a bridge before you drive over it?

pamcake | 13 hours ago

A business or government can (should) separately package, review, and audit code without involving upstream developers or maintainers at all.

post-it | 23 hours ago

> The threat of being forever tainted is enough to make people more cautious

No it's not. The blame game was very popular in the Eastern Bloc and it resulted in a stagnant society where lots of things went wrong anyway. For instance, Chernobyl.

KronisLV | 20 hours ago

> What we need is accountability and ties to real-world identity.

Who's gonna enforce that?

> If you're compromised, you're burned forever in the ledger.

Guess we can't use XZ utils anymore cause Lasse Collin got pwned.

Also can't use Chalk, debug, ansi-styles, strip-ansi, supports-color, color-convert and others due to Josh Junon also ending up a victim.

Same with ua-parser-js and Faisal Salman.

Same with event-stream and Dominic Tarr.

Same with the 2018 ESLint hack.

Same with everyone affected by Shai-Hulud.

Hell, at that point some might go out of their way to get people they don't like burned.

At the same time, I think that stopping reliance on package managers that move fast and break things and instead making OS maintainers review every package and include them in distros would make more sense. Of course, that might also be absolutely insane (that's how you get an ecosystem that's from 2 months to 2 years behind the upstream packages) and take 10x more work, but with all of these compromises, I'd probably take that and old packages with security patches, instead of pulling random shit with npm or pip or whatever.

Though having some sort of a ledger of bad actors (instead of people who just fuck up) might also be nice, if a bit impossible to create - because in the current day world that's potentially every person that you don't know and can't validate is actually sending you patches (instead of someone impersonating them), or anyone with motivations that aren't clear to you, especially in the case of various "helpful" Jia Tans.

anthk | 20 hours ago

There is no need for that bullshit. Guix can just set up an isolated container in seconds, not touching your $HOME at all and importing all the Python/npm/whatever dependencies on the spot.

hannahoppla | 10 hours ago

Accountability is on the people using a billion third party dependencies, you need to take responsibility for every line of code you use in your project.

encomiast | 9 hours ago

If you are really talking about dependencies, I’m not sure you’ve really thought this all the way through. Are you inspecting every line of the Python interpreter and its dependencies before running? Are you reading the compiler that built the Python interpreter?

autoexec | 10 minutes ago

It's still smart to limit the amount of code (and coders) you have to trust. A large project like Python should be making sure its dependencies are safe before each release. In our own projects, we'd probably be better off taking just the code we need from a library, verifying it (at least to the extent of looking for something as suspect as a random block of base64-encoded data), and copying it into our projects directly, rather than adding a ton of external dependencies plus every last dependency they pull in and then just hoping that nobody anywhere in that chain gets compromised.
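That base64 check can be sketched in a few lines; the 120-character threshold is an arbitrary illustration and will false-positive on legitimate embedded data (certificates, test fixtures), so it's a review aid, not a detector:

```python
import re

# Runs of 120+ base64-alphabet characters: long enough to skip hashes
# and short tokens, short enough to catch embedded payloads.
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{120,}")

def flag_base64_blobs(source: str):
    """Return (line_number, excerpt) pairs for suspiciously long literals."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for m in B64_BLOB.finditer(line):
            hits.append((lineno, m.group(0)[:40] + "..."))
    return hits
```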

dotancohen | a day ago

  > We just can't trust dependencies and dev setups.

In one of my vibe coded personal projects (Python and Rust project) I'm actually getting rid of most dependencies and vibe coding replacements that do just what I need. I think that we'll see far fewer dependencies in future projects.

Also, I typically only update dependencies when either an exploit is known in the current version or I need a feature present in a later version - and even then not to the absolute latest version if possible. I do this for all my projects under the many-eyes principle: finding exploits takes time, so new updates are riskier than slightly-stale versions.

Though, if I'm filing a bug with a project, I do test and file against the latest version.

adw | a day ago

> In one of my vibe coded personal projects (Python and Rust project) I'm actually getting rid of most dependencies and vibe coding replacements that do just what I need. I think that we'll see far fewer dependencies in future projects.

No free lunch. LLMs are capable of writing exploitable code and you don’t get notifications (in the eg Dependabot sense, though it has its own problems) without audits.

dotancohen | a day ago

My vibe coded personal projects don't have the source code available for attackers to target specifically.

heavyset_go | a day ago

You don't need open source access to be exploitable or exploited

nimih | a day ago

It might surprise you to learn that a large number of software exploits are written without the attacker having direct access to the program's source code. In fact, shocking as it may seem today, huge numbers of computers running the Windows operating system and Internet Explorer were compromised without the attackers ever having access to the source code of either.

sersi | 10 hours ago

I'm actually curious whether the Windows source code leak of 2004 increased the number of exploits against Windows. I'm not sure if it included Internet Explorer; I remember that Windows 2000 was included.

uyzstvqs | a day ago

That's no solution. If you can't trust and/or verify dependencies, and they are malicious, then you have bigger problems than what a sandbox will protect against. Even if it's sandboxed and your host machine is safe, you're presumably still going to use that malicious code in production.

nazcan | a day ago

I'm supportive of going further - like restricting what a library is able to do. e.g. if you are using some library to compute a hash, it should not make network calls. Without sub-processes, it would require OS support.

fn-mote | a day ago

Which exists: pledge in OpenBSD.

Making this work on a per-library level … seems a lot harder. The cost for being very paranoid is a lot of processes right now.

lanstin | a day ago

It's a language/compiler/function call stack feature, not existing as far as I know, but it would be awesome - the caller of a function would specify what resources/syscalls could be made, and anything down the chain would be thusly restricted. The library could try to do its phone home stats and it would fail. Couldn't be C or a C type language runtime, or anything that can call to assembly of course. @compute_only decorator. Maybe could be implemented as a sys-call for a thread - thread_capability_remove(F_NETWORK + F_DISK)? Wouldn't be able to schedule any work on any thread in that case, but Go could have pools of threads for coroutines with varying capabilities. Something to put the developer back in charge of the mountain of dependencies we are all forced to manage now.
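
That `@compute_only` decorator can be approximated at runtime in today's Python, though only as a toy: monkeypatching is trivially bypassable, and nothing short of a real effect system or kernel enforcement gives an actual guarantee. A sketch of the idea:

```python
import functools
import socket

def compute_only(fn):
    """Toy sketch: deny socket creation for the duration of the call.

    Runtime-only and trivially bypassable (a malicious library can keep
    its own reference to socket.socket); a real capability/effect system
    would enforce this statically or at the syscall level."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        real_socket = socket.socket
        def blocked(*a, **k):
            raise PermissionError("network forbidden inside @compute_only")
        socket.socket = blocked
        try:
            return fn(*args, **kwargs)
        finally:
            socket.socket = real_socket  # always restore
    return wrapper
```

A hash function decorated this way still computes fine, while any attempt to phone home from inside it raises immediately.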

azornathogron | 5 hours ago

In type system theory I think what you're looking for is "effect systems".

You make the type system statically encode categories of side-effects, so you can tell from the type of a function whether it is pure computation, or if not what other things it might do. Exactly what categories of side-effect are visible this way depends on the type system; some are more expressive than others.

But it means when you use a hash function you can know that it's, eg, only reading memory you gave it access to and doing some pure computation on it.

exyi | a day ago

Except that LiteLLM probably got pwned because they used Trivy in CI. If Trivy ran in a proper sandbox, the compromised job could not publish a compromised package.

(Yes, they should better configure which CI job has which permissions, but this should be the default or it won't always happen)

staticassertion | a day ago

That's exactly what a sandbox is designed for. I think you're overly constraining your view of what sort of sandboxing can exist. You can, for example, sandbox code such that it can't do anything but read/write to a specific segment of memory.

dist-epoch | a day ago

This stuff already exists - mobile phone sandboxed applications with intents (allow Pictures access, ...)

But mention that on HN and watch getting downvoted into oblivion: the war against general computation, walled gardens, locked down against device owners...

jFriedensreich | a day ago

You are not being downvoted because the core premise is wrong, but because framing it as a choice between being locked out of general-purpose computing and security repeats the brainwashing that companies like Apple and Meta use to justify their rent-seeking and locking out of competitors and user agency. We have all the tools to build safe systems that don't require up-front manifest declarations and app store review by the lord, but instead give users the controls, dials, and visibility themselves, in the moment. And yes, many of these UIs might look like intent sheets. The difference is who ultimately controls how these interfaces look and behave.

MiddleEndian | 18 hours ago

You can have both. Bazzite Linux lets you sandbox applications and also control your own device.

udave | a day ago

strongly agree. we keep giving away trust to other entities in order to make our jobs easier. trusting maintainers is still better than trusting a clanker but still risky. We need a sandboxed environment where we can build our software without having to worry about these unreliable factors.

On a personal note, I have been developing and talking to a clanker ( runs inside ) to get my day to day work done. I can have multiple instances of my project using worktrees, have them share some common dependencies and monitor all of them in one place. I plan to opensource this framework soon.

fulafel | a day ago

> In such an environment the container would crash, we see the violations, delete it and dont' have to worry about it.

This is the interesting part. What kind of UI or other mechanisms would help here? There's no silver bullet for detecting and crashing on "something bad". The adversary can test against your sandbox as well.

poemxo | a day ago

"Anymore" is right though. This should be a call to change the global mindset regarding dependencies. We have to realize that the "good ol days" are behind us in order to take action.

Otherwise people will naysay and detract from the cause. "It worked before" they will say. "Why don't we do it like before?"

DISA STIG already forbids use of the EPEL for Red Hat Enterprise Linux. Enterprise software install instructions are littered with commands to turn off gpgcheck and install rpm's from sourceforge. The times are changing and we need cryptographically verifiable guarantees of safety!

miraculixx | a day ago

I agree in general, but how are you ever upgrading any of that? Could be a "sleeper compromise" that only activates sometime in the future. Open problem.

jFriedensreich | a day ago

A sleeper compromise that cannot execute can still not reach its goal. Generally speaking outdated dependencies without known compromise in a sandbox are still better than the latest deps with or without sandbox.

Andrei_dev | a day ago

Sandboxes yes, but who even added the dependency? Half the projects I see have requirements.txt written by Copilot. AI says "add litellm", dev clicks accept, nobody even pins versions.

Then we talk about containment like anyone actually looked at that dep list.
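
Catching the unpinned-deps failure mode is cheap enough to automate in CI. A rough sketch (hypothetical helper; a real check should use a proper requirements parser, and hash pinning via `pip install --require-hashes` goes further still):

```python
def unpinned_requirements(requirements_text: str):
    """Return requirement lines that lack an exact '==' pin.

    Skips comments, blank lines, and pip options (lines starting with '-').
    Range specifiers like '>=' count as unpinned on purpose."""
    loose = []
    for raw in requirements_text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop trailing comments
        if not line or line.startswith("-"):
            continue
        if "==" not in line:
            loose.append(line)
    return loose
```

Run it against `requirements.txt` in CI and fail the build if it returns anything; at least then "AI wrote the dep list" can't silently mean "we install whatever was uploaded last night."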

Aurornis | 23 hours ago

> Dev containers were never good enough, too clumsy and too little isolation.

I haven't kept up with the recent exploits, so a side question: Have any of the recent supply chain attacks or related exploits included any escapes from basic dev containers?

AbanoubRodolf | 17 hours ago

Defense-in-depth on dev machines is useful but doesn't address the actual attack path here. The credential that was stolen lived in CI, not on a dev laptop — Trivy ran with PyPI publisher permissions because that's standard practice for "scanner before publish."

The harder problem is that CI pipelines routinely grant scanner processes more credential access than they need. Trivy needed read access to the repo and container layers; it didn't need PyPI publish tokens. Scoping CI secrets to the minimum necessary operation, and injecting them only for the specific job that needs them rather than the entire pipeline, would have contained the blast radius here.

ashishb | 16 hours ago

> We need to start working in full sandboxes with defence in depth that have real guardrails

Happily sandboxing almost all third-party tools since 2025. `npm run dev` does not need access to my full disk.

pjc50 | 8 hours ago

The trouble with sandboxing is that eventually everything you want to access ends up inside the sandbox. Otherwise the friction is infuriating.

I see people going in the opposite direction with "dump everything in front of my army of LLMs" setups. Horribly insecure, but gotta go fast, right?

mohsen1 | a day ago

If it had not spun up so many Python processes and overwhelmed the system with them (friends found out it was consuming too much CPU from the fan noise!), it would have been much more successful. Similar to the xz attack in that respect.

It does a lot of CPU-intensive work:

    spawn background python
    decode embedded stage
    run inner collector
    if data collected:
        write attacker public key
        generate random AES key
        encrypt stolen data with AES
        encrypt AES key with attacker RSA pubkey
        tar both encrypted files
        POST archive to remote host

franktankbank | a day ago

I can't tell which part of that is expensive unless many multiples of python are spawned at the same time. Are any of the payloads particularly large?

detente18 | a day ago

LiteLLM maintainer here, this is still an evolving situation, but here's what we know so far:

1. Looks like this originated from the trivvy used in our ci/cd - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09

2. If you're on the proxy docker, you were not impacted. We pin our versions in the requirements.txt

3. The package is in quarantine on pypi - this blocks all downloads.

We are investigating the issue, and seeing how we can harden things. I'm sorry for this.

- Krrish

redrove | a day ago

>1. Looks like this originated from the trivvy used in our ci/cd

Were you not aware of this in the short time frame that it happened in? How come credentials were not rotated to mitigate the trivy compromise?

wheelerwj | a day ago

The latest trivy attack was announced just yesterday. If you go out to dinner or take a night off, it's totally plausible to have not seen it.

anishgupta | 15 hours ago

AFAIK the trivy attack was first in the news on March 19th for the GitHub Actions, and for the Docker images it was March 23rd.

Imustaskforhelp | a day ago

> - Krrish

Was your account completely compromised? (Judging from the commits made by TeamPCP on your accounts.)

Are you in contact with all the projects that use litellm downstream, and do you know whether they are safe? (I am assuming not.)

I am unable to understand how it compromised your account itself from the exploit at trivvy being used in CI/CD as well.

redrove | a day ago

>I am unable to understand how it compromised your account itself from the exploit at trivvy being used in CI/CD as well.

Token in CI could've been way too broad.

franktankbank | a day ago

He would have to state he didn't in fact make all those commits and close the issue.

detente18 | a day ago

It was the PYPI_PUBLISH token, which was in our GitHub project as an env var, that got sent to trivvy.

We have deleted all our pypi publishing tokens.

Our accounts had 2fa, so it's a bad token here.

We're reviewing our accounts, to see how we can make it more secure (trusted publishing via jwt tokens, move to a different pypi account, etc.).

redrove | a day ago

How did PYPI_PUBLISH lead to a full GH account takeover?

franktankbank | a day ago

Don't hold your breath for an answer.

ezekg | a day ago

I'd imagine the attacker published a new compromised version of their package, which the author eventually downloaded, which pwned everything else.

chunky1994 | a day ago

Their Personal Access Token must’ve been pwned too, not sure through what mechanism though

Imustaskforhelp | a day ago

They have written about it on github to my question:

Trivvy hacked (https://www.aquasec.com/blog/trivy-supply-chain-attack-what-...) -> all circleci credentials leaked -> included pypi publish token + github pat -> | WE DISCOVER ISSUE | -> pypi token deleted, github pat deleted + account removed from org access, trivvy pinned to last known safe version (v0.69.3)

What we're doing now:

    Block all releases, until we have completed our scans
    Working with Google's mandiant.security team to understand scope of impact
    Reviewing / rotating any leaked credentials
https://github.com/BerriAI/litellm/issues/24518#issuecomment...

franktankbank | a day ago

Does that explain how circleci was publishing commits and closing issues?

celticninja | a day ago

0.69.3 isn't safe. The safe thing to do is remove all trivy access or, failing that, pin the version: 0.35 is the last and AFAIK only safe version.

https://socket.dev/blog/trivy-under-attack-again-github-acti...

Imustaskforhelp | a day ago

I have sent your message to the developer on GitHub and they have changed the version to 0.35.0, so thanks.

https://github.com/BerriAI/litellm/issues/24518#issuecomment...

mike_hearn | a day ago

Perhaps it's too obvious but ... just running the publish process locally, instead of from CI, would help. Especially if you publish from a dedicated user on a Mac where the system keychain is pretty secure.

staticassertion | a day ago

I'm not sure how. Their local system seems just as likely to get compromised through a `pip install` or whatever else.

In CI they could easily have moved `trivy` to its own dedicated worker that had no access to the PYPI secret, which should be isolated to the publish command and only the publish command.

mike_hearn | a day ago

User isolation works, the keychain isolation works. On macOS tokens stored in the keychain can be made readable only by specific apps, not anything else. It does require a bit of infrastructure - ideally a Mac app that does the release - but nothing you can't vibe code quickly.

staticassertion | a day ago

That's true, but it seems far more complex than just moving trivy to a separate workflow with no permissions and likely physical isolation between it and a credential. I'm pretty wary of the idea that malware couldn't just privesc - it's pretty trivial to obtain root on a user's laptop. Running as a separate, unprivileged user helps a ton, but again, I'm skeptical of this vs just using a github workflow.

mike_hearn | 10 hours ago

I'm looking for more general solutions. "Properly configure Trivy" is too specific, it's obvious in hindsight but not before.

Privilege escalation on macOS is very hard indeed. Apple have been improving security for a long time, it is far, far ahead of Linux or Windows in this regard. The default experience in Xcode is that a release-mode app you make will be sandboxed, undebuggable, have protected keychain entries other apps can't read, have a protected file space other apps can't read, and its own code will also be read-only to other apps. So apps can't interfere with each other or escalate to each other's privileges even when running as the same UNIX user. And that's the default, you don't have to do anything to get that level of protection.

staticassertion | 7 hours ago

Privesc is trivial on every desktop OS if you run as a regular user. I can write to your rc files so it's game over.

App Store apps are the exception, which is great, but presumably we're not talking about that? If we are, then yeah, app stores solve these problems by making things actually sandboxed.

mike_hearn | 4 hours ago

Any app can be sandboxed on macOS and by default newly created apps are; that's why I say if you create a new app in Xcode then anything run by that app is sandboxed out of the box. App Store enforces it but beyond that isn't involved.

tedivm | a day ago

This problem is solved by not having a token. GitHub and PyPI both support OIDC-based workflows. Grant only the publish job access to the OIDC endpoint; then the Trivy job has nothing it can steal.
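
A hedged sketch of what that looks like in GitHub Actions (the job layout and build step are illustrative; `pypa/gh-action-pypi-publish` with `id-token: write` uses PyPI's OIDC trusted publishing, so there is no long-lived PyPI token anywhere for a compromised scanner step to exfiltrate):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read         # scanner job can read the repo, nothing else; no secrets
    steps:
      - uses: actions/checkout@v4
      - run: trivy fs .      # even if this step is compromised, there is nothing to steal

  publish:
    needs: scan
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write        # only this job can mint the short-lived OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m build # hypothetical build step
      - uses: pypa/gh-action-pypi-publish@release/v1
```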

NewJazz | 14 hours ago

Are you spelling it with two vs on purpose?

outside2344 | a day ago

Is it just in 1.82.8 or are previous versions impacted?

Imustaskforhelp | a day ago

1.82.7 is also impacted if I remember correctly.

GrayShade | a day ago

1.82.7 doesn't have litellm_init.pth in the archive. You can download them from pypi to check.

EDIT: no, it's compromised, see proxy/proxy_server.py.

cpburns2009 | a day ago

1.82.7 has the payload in `litellm/proxy/proxy_server.py` which executes on import.
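
The `litellm_init.pth` trick mentioned above is why 1.82.8 didn't even need your code to import anything in particular: the `site` module executes any line starting with `import ` in a site-packages `.pth` file at every interpreter startup. A quick audit sketch (hypothetical `risky_pth_lines` helper, not a complete detector):

```python
import pathlib
import site

def risky_pth_lines(dirs=None):
    """List code-executing lines in .pth files under the given directories.

    The `site` module runs any .pth line that starts with 'import ' at
    every interpreter startup, before any of your own code executes."""
    if dirs is None:
        dirs = site.getsitepackages() + [site.getusersitepackages()]
    findings = []
    for d in dirs:
        base = pathlib.Path(d)
        if not base.is_dir():
            continue
        for pth in base.glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line[:100]))
    return findings
```

Legitimate packages (e.g. editable installs) also use this mechanism, so hits need a human look, but an unexpected `import` line in a fresh `.pth` is exactly what this compromise shipped.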

bognition | a day ago

The decision to block all downloads is pretty disruptive, especially for people on pinned known-good versions. It's breaking a bunch of my systems that are all launched with `uv run`

cpburns2009 | a day ago

That's PyPI's behavior when they quarantine a package.

Shank | a day ago

> Its breaking a bunch of my systems that are all launched with `uv run`

From a security standpoint, you would rather pull in a library that is compromised and run a credential stealer? It seems like this is the exact intended and best behavior.

MeetingsBrowser | a day ago

Are you sure you are pinned to a “known good” version?

No one initially knows how much is compromised

tedivm | a day ago

You should be using build artifacts, not relying on `uv run` to install packages on the fly. Besides the massive security risk, it also means that you're dependent on a bunch of external infrastructure every time you launch. PyPI going down should not bring down your systems.

zbentley | a day ago

This is the right answer. Unfortunately, this is very rarely practiced.

More strangely (to me), this is often addressed by adding loads of fallible/partial caching (in e.g. CICD or deployment infrastructure) for package managers rather than building and publishing temporary/per-user/per-feature ephemeral packages for dev/testing to an internal registry. Since the latter's usually less complex and more reliable, it's odd that it's so rarely practiced.

lanstin | a day ago

There are so many advantages to deployable artifacts, including auditability and fast roll-back. Also you can block so many risky endpoints from your compute outbound networks, which means even if you are compromised, it doesn't do the attacker any good if their C&C is not allow-listed.

saidnooneever | a day ago

known good versions and which are those exactly??????

zbentley | a day ago

That's a good thing (disruptive "firebreak" to shut down any potential sources of breach while info's still being gathered). The solve for this is artifacts/container images/whatnot, as other commenters pointed out.

That said, I'm sorry this is being downvoted: it's unhappily observing facts, not arguing for a different security response. I know that's toeing the rules line, but I think it's important to observe.

wasmitnetzen | 9 hours ago

Take this as an argument to rethink your engineering decision to base your workflows entirely on the availability of an external dependency.

kleton | a day ago

There are hundreds of PRs fixing valid issues in your GitHub repo, seemingly in limbo for weeks. What is the maintainer state over there?

zparky | a day ago

Not really the time for that. There's also PRs being merged every hour of the day.

michh | a day ago

increasing the (social) pressure on maintainers to get PRs merged seems like the last thing you should be doing in light of preventing malicious code ending up in dependencies like this

i'd much rather see a million open PRs than a single malicious PR sneak through due to lack of thorough review.

detente18 | a day ago

Update:

- Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI
- All maintainer accounts have been changed
- All keys for GitHub, Docker, CircleCI, pip have been deleted

We are still scanning our project to see if there's any more gaps.

If you're a security expert and want to help, email me - krrish@berri.ai

cosmicweather | a day ago

> All maintainer accounts have been changed

What about the compromised accounts (as in your main account)? Are they completely unrecoverable?

detente18 | 2 hours ago

I deleted it, to be safe.

MadsRC | a day ago

Dropped you a mail from mads.havmand@nansen.ai

ozozozd | a day ago

Kudos for this update.

Write a detailed postmortem, share it publicly, continue taking responsibility, and you will come out of this having earned an immense amount of respect.

harekrishnarai | a day ago

> it seems your personal account is also compromised. I just checked for the github search here https://github.com/search?q=%22teampcp+owns%22

vintagedave | a day ago

This must be super stressful for you, but I do want to note your "I'm sorry for this." It's really human.

It is so much better than, you know... "We regret any inconvenience and remain committed to recognising the importance of maintaining trust with our valued community and following the duration of the ongoing transient issue we will continue to drive alignment on a comprehensive remediation framework going forward."

Kudos to you. Stressful times, but I hope it helps to know that people are reading this appreciating the response.

cyanydeez | a day ago

Lawyers are slowly eating humanity.

singleshot_ | 23 hours ago

Allegedly*

bmurphy1976 | 20 hours ago

For now. They're about to get hit by the AI wave as bad as us software devs. Who knows what's on the other side of this.

blueone | 20 hours ago

Sorry that I have to be the one to tell you this, but lawyers are fine. Sure, AI will have an impact, but nothing like the once hyped idea that it would replace lawyers. It has actually been amusing to watch the hype cycle play out around AI when it comes to lawyers.

throwawaytea | 18 hours ago

My parents had a weird green card and paperwork issue that was becoming a big problem. Everyone in their social circle recommended an immigration type lawyer. Everyone.

My dad was confident he could figure it out based on his perplexity Pro account. He attacked the problem from several angles and used it for help with what to do, how to do it, what to ask for when visiting offices, how to press them to move forward, and tons of other things.

Got the problem resolved.

So it definitely can reduce hiring lawyers even.

recpen | 11 hours ago

Lawyership in the sense of the profession may survive and adapt. Individual lawyers, not so much. I strongly doubt the new equilibrium (if we ever reach one) will need so many lawyers.

Same logic for software developers.

nextos | 19 hours ago

I think we really need to use sandboxes. Guix provides sandboxed environments by just flipping a switch. NixOS is in an ideal position to do the same, but for some reason they are regarded as "inconvenient".

Personally, I am a heavy user of Firejail and bwrap. We need defense in depth. If someone in the supply chain gets compromised, damage should be limited. It's easy to patch the security model of Linux with userspaces, and even easier with eBPF, but the community is somehow stuck.

staticassertion | 17 hours ago

What would be really helpful is if software sandboxed itself. It's very painful to sandbox software from the outside and it's radically less effective because your sandbox is always maximally permissive.

But, sadly, there's no x-platform way to do this, and sandboxing APIs are incredibly bad still and often require privileges.

> It's easy to patch the security model of Linux with userspaces, and even easier with eBPF, but the community is somehow stuck.

Neither of these is easy tbh. Entering a Linux namespace requires root, so if you want your users to be safe then you have to first ask them to run your service as root. eBPF is a very hard boundary to maintain, requiring you to know every system call that your program can make - updates to libc, upgrades to any library, can break this.

Sandboxing tooling is really bad.

ashishb | 16 hours ago

> It's very painful to sandbox software from the outside and it's radically less effective because your sandbox is always maximally permissive.

Not really.

Let's say I am running `~/src/project1 $ litellm`

Why does this need access to anything outside of `~/src/project1`?

Even if it does, you should expose exactly those particular directories (e.g. ~/.config) and nothing else.

staticassertion | 16 hours ago

How are you setting that sandbox up? I've laid out numerous constraints - x-platform support is non-existent for sandboxing, sandboxing requires privileges to perform, whole-program sandboxing is fundamentally weaker, maintenance of sandboxing is best done by developers, etc.

> Even if it does, you should expose exactly those particular directories (e.g. ~/.config) and nothing else.

Yes, but now you are in charge of knowing every potential file access, network access, or possibly even system call, for a program that you do not maintain.

ashishb | 15 hours ago

> Yes, but now you are in charge of knowing every potential file access, network access, or possibly even system call, for a program that you do not maintain.

Not really. I try to capture the most common ones for caching [1], but if I miss it, then it is just inefficient, as it is equivalent to a cache miss.

I'll emphasize again, "no linter/scanner/formatter (e.g., trivy) should need full disk access".

1 - https://github.com/ashishb/amazing-sandbox/blob/fddf04a90408...

staticassertion | 15 hours ago

Okay, so you're using docker. Cool, that's one of the only x-plat ways to get any sandboxing. Docker itself is privileged and now any unsandboxed program on your computer can trivially escalate to root. It also doesn't limit nearly as much as a dev-built sandbox because it has to isolate the entire process.

Have you solved for publishing? You'll need your token to enter the container or you'll need an authorizing proxy. Are cache volumes shared? In that case, every container is compromised if one is. All of these problems and many more go away if the project is built around them from the start.

It's perfectly nice to wrap things up in docker but there's simply no argument here - developers can write sandboxes for their software more effectively because they can architect around the sandbox, you have to wrap the entire thing generically to support its maximum possible privileges.

ashishb | 15 hours ago

> Docker itself is privileged and now any unsandboxed program on your computer can trivially escalate to root.

Inside the sandbox but not on my machine. Show me how it can access an unmounted directory.

> Have you solved for publishing? You'll need your token to enter the container or you'll need an authorizing proxy.

Amazing-sandbox does not solve for that. The current risk is contamination; if you are running `trivy`, it should not need access to tokens in a different env/directory.

> All of these problems and many more go away if the project is built around them from the start.

Please elaborate on your approach that will allow me to run markdown/JS/Python/Go/Rust linters and security scanners. Remember that `trivy`, which caused the `litellm` compromise, is a security scanner itself.

> developers can write sandboxes for their software more effectively because they can architect around the sandbox,

Yeah, let's ask 100+ linter providers to write sandboxes for you. I can't even get maintainers to respond to legitimate & trivial PRs many a time.

staticassertion | 15 hours ago

I'm not going to code review your sandbox project for you.

sellmesoap | 9 hours ago

> Inside the sandbox but not on my machine. Show me how it can access an unmounted directory.

So it says right on the tin of my favorite distro: 'Warning: Beware that the docker group membership is effectively equivalent to being root! Consider using rootless mode below.' So `docker run super-evil-oci-container` with a bind mount or two, and your would-be attacker doesn't need to guess your sudo password.

imtringued | 7 hours ago

What's particularly vexing is that there is this agentic sandboxing software called "container-use", and out of the box it requires you to add a user to the docker group. They haven't thought about what that really means and why running docker in that configuration shouldn't be allowed; instead they have made it mandatory as a default.

ashishb | 2 hours ago

> docker run super-evil-oci-container

  1. That super evil OCI container still needs to find a vulnerability in Docker
  2. You can run Docker in rootless mode e.g. Orbstack runs without root

eichin | 16 hours ago

If the whole point of sandboxing is to not trust the software, it doesn't make sense for the software to do the sandboxing. (At most it should have a standard way to suggest what access it needs, and then your outside tooling should work with what's reasonable and alert on what isn't.) The android-like approach of sandboxing literally everything works because you are forced to solve these problems generically and at scale - things like "run this as a distinct uid" are a lot less hassle if you're amortizing it across everything.

(And no, most linux namespace stuff does not require root, the few things that do can be provided in more-controlled ways. For examples, look at podman, not docker.)

staticassertion | 15 hours ago

> If the whole point of sandboxing is to not trust the software, it doesn't make sense for the software to do the sandboxing.

That's true, sort of. I mean, that isn't the whole point of sandboxing because the threat model for sandboxing is pretty broad. You could have a process sandbox just one library, or sandbox itself in case of a vulnerability, or it could have a separate policy / manifest the way browser extensions do (that prompts users if it broadens), etc. There's still benefit to isolating whole processes though in case the process is malicious.

> (And no, most linux namespace stuff does not require root, the few things that do can be provided in more-controlled ways. For examples, look at podman, not docker.)

The only linux namespace that doesn't require root is user namespace, which basically requires root in practice. https://www.man7.org/linux/man-pages/man2/clone.2.html

Podman uses unprivileged user namespaces, which are disabled on the most popular distros because it's a big security hole.

ashishb | 16 hours ago

I am happily running all third-party tools inside the Amazing Sandbox[1]. I made it public last year.

1 - https://github.com/ashishb/amazing-sandbox

Imustaskforhelp | a day ago

I just want to share an update

the developer has made a new GitHub account and, after my suggestion, cross-linked it with their Hacker News profile (the HN "about" points to the GitHub account and vice versa) to verify that the new account is legitimate

Worth following this thread as they mention that: "I will be updating this thread, as we have more to share." https://github.com/BerriAI/litellm/issues/24518

mrexcess | a day ago

You're making great software and I'm sorry this happened to you. Don't get discouraged, keep bringing the open source disruption!

kingreflex | a day ago

we're using litellm via helm charts with tags main-v1.81.12-stable.2 and main-v1.80.8-stable.1 - assuming they're safe?

also how are we sure that docker images aren't affected?

saltyoldman | a day ago

Docker deployments are safer even if affected, because there is a lower chance (but not zero) that you mounted your credentials into the container. It would have access to LLM keys of course, but that's not really what the hacker is after. He's after private SSH keys.

That being said this hack was a direct upload to PyPI in the last few days, so very unlikely those images are affected.

anishgupta | 15 hours ago

yep joining here late but docker images are not affected as we saw on twitter

rao-v | 23 hours ago

I put together a little script to search for and list installed litellm versions on my systems here: https://github.com/kinchahoy/uvpowered-tools/blob/main/inven...

It's very much not production grade. It might miss sneaky ways to install litellm, but it does a decent job of scanning all my conda, .venv, uv and system environments without invoking a python interpreter or touching anything scary. Let me know if it misses something that matters.

Obviously read it before running it etc.

mikert89 | 22 hours ago

Similar to delve, this guy has almost no work experience. You have to wonder if YC and the cult of extremely young founders is causing instability issues in society at large?

zdragnar | 21 hours ago

Welcome to the new era, where programming is neither a skill nor a trade, but a task to be automated away by anyone with a paid subscription.

mikert89 | 21 hours ago

a lot of software isn't that important so it's fine, but some actually is important. especially with a brand name slapped on it that people will trust

whattheheckheck | 16 hours ago

The industry needs to step up and plant a flag for professionalization certifications for proper software engineering. Real hard exams etc

jacamera | 8 hours ago

I can't even imagine what these exams would look like. The entire profession seems to boil down to making the appropriate tradeoffs for your specific application in your specific domain using your specific tech stack. There's almost nothing that you always should or shouldn't do.

leftyspook | 6 hours ago

All software runs on somebody's hardware. Ultimately even an utterly benign program like `cowsay` could be backdoored to upload your ssh keys somewhere.

utrack | 6 hours ago

https://xkcd.com/2347/ , but with `fortune -a` and `cowsay` instead of imagemagick

gopher_space | 20 hours ago

It's interesting to see how the landscape changes when the folks upstream won't let you offload responsibility. Litellm's client list includes people who know better.

moomoo11 | 14 hours ago

It’s a flex now. But there are still many people doing it for the love of the game.

pojzon | 21 hours ago

This is just one of many projects that was a victim of the Trivy hack. There are millions of such projects, and this issue will be exploited in the coming months if not years.

driftnode | 13 hours ago

the chain here is wild. trivy gets compromised, that gives access to your ci, ci has the pypi publish token, now 97 million monthly downloads are poisoned. was the pypi token scoped to publishing only or did it have broader access? because the github account takeover suggests something wider leaked than just the publish credential

kreelman | 9 hours ago

I wonder if there are a few things here....

It would be great if Linux was able to do simple chroot jails and run tests inside of them before releasing software. In this case, it looks like the whole build process would need to be done in the jail. Tools like lxroot might do enough of what chroot on BSD does.

It seems like software tests need to have a class of test that checks whether any of the components of an application have been compromised in some way. This in itself may be somewhat complex...

We are in a world where we can't assume secure operation of components anymore. This is kinda sad, but here we are....

driftnode | 4 hours ago

The sad part is you're right that we can't assume secure operation of components anymore, but the tooling hasn't caught up to that reality. Chroot jails help with runtime isolation but the attack here happened at build time, the malicious code was already in the package before any test could run. And the supply chain is deep. Trivy gets compromised, which gives CI access, which gives PyPI access. Even if you jail your own builds you're trusting that every tool in your pipeline wasn't the entry point. 97 million monthly downloads means a lot of people's "secure" pipelines just ran attacker code with full access.

sobellian | 4 hours ago

If the payload is a credential stealer then they can use that to escalate into basically anything right?

driftnode | 4 hours ago

Yes and the scary part is you might never know the full extent. A credential stealer grabs whatever is in memory or env during the build, ships it out, and the attacker uses those creds weeks later from a completely different IP. The compromised package gets caught and reverted, everyone thinks the incident is over, meanwhile the stolen tokens are still valid. I wonder how many teams who installed 1.82.7 actually rotated all their CI secrets after this, not just uninstalled the bad version.

N_Lens | 12 hours ago

Good work! Sorry to hear you're in this situation, good luck and godspeed!

Blackthorn | a day ago

Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.

It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.

cpburns2009 | a day ago

safetensors is just as vulnerable to this sort of exploit using a pth file since it's a Python package.

Blackthorn | a day ago

Yeah, fair enough, the problem here is that the credentials were stolen, the fact that the exploit was packaged into a .pth is just an implementation detail.

cedws | a day ago

This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.

This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.

varenc | 19 hours ago

What is the rationale for the attacker spamming the relevant issue with bot replies? Does this benefit them? Maybe it makes discussion impossible, confusing maintainers and delaying the time to a fix?

driftnode | 13 hours ago

whats new isnt the shortcuts, its the cascading. one compromised trivy instance led to kics led to litellm led to dspy and crewai and mlflow and hundreds of mcp servers downstream. the attacker didnt need to find five separate vulnerabilities, they found one and rode the dependency graph. thats a fundamentally different threat model than what most security tooling is built around

danielvaughn | a day ago

I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find whether an exact version of a package has ever been installed on your machine. All I can say is: good luck.

The Python ecosystem provides too many nooks and crannies for malware to hide in.
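
To make that concrete, a filesystem-only sweep can be sketched in a few lines. The bad-version set is from this incident; the scan root is up to you, and since it's a pure filesystem walk, no interpreter from any scanned environment ever runs:

```python
from pathlib import Path

# Versions known-compromised in this incident.
BAD_VERSIONS = {"1.82.7", "1.82.8"}

def scan_for_litellm(root):
    """Find litellm installs under `root` by their dist-info metadata
    directories and flag any known-bad versions. Pure filesystem walk:
    the scanned environments' interpreters are never executed."""
    hits = []
    for info in Path(root).rglob("litellm-*.dist-info"):
        version = info.name.removeprefix("litellm-").removesuffix(".dist-info")
        hits.append((str(info.parent), version, version in BAD_VERSIONS))
    return hits
```

This still misses the nooks and crannies (editable installs, zipped eggs, conda caches), which is exactly the parent's point.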

te_chris | a day ago

I reviewed the LiteLLM source a while back. Without wanting to be mean, it was a mess. Steered well clear.

rnjs | a day ago

Terrible code quality and terrible docs

zhisme | a day ago

Am I the only one with the feeling that in the LLM era we now have a bigger amount of malicious software, say parsers/fetchers of credentials/SSH/private keys? And that it's easier to produce them and then include them in some third-party open-source software? Or is it just that our attention gets focused on such things?

hmokiguess | a day ago

What’s the best way to identify a compromised machine? Check uv, conda, pip, venv, etc across the filesystem? Any handy script around?

EDIT: here's what I did, would appreciate some sanity checking from someone who's more familiar with Python than I am, it's not my language of choice.

  find / -name "litellm_init.pth" -type f 2>/dev/null

  find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null

persedes | a day ago

there's probably a more precise way, but if you're on uv:

  rg litellm  --iglob='*.lock'

lukewarm707 | a day ago

these days, i just use a private llm. it's very quick and when i see the logs, it does a better job than me for this type of task.

no i don't let it connect to web...

wswin | a day ago

I will hold off on updating anything until this whole trivy case gets cleaned up.

f311a | a day ago

Their previous release would be easily caught by static analysis. PTH is a novel technique.

Run all your new dependencies through static analysis and don't install the latest versions.

I implemented static analysis for Python that detects close to 90% of such injections.

https://github.com/rushter/hexora
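
As a sketch of what AST-level detection looks like (a toy version, nothing close to hexora's actual rule set), flagging dynamic execution and inline base64 payloads:

```python
import ast

SUSPICIOUS_CALLS = {"exec", "eval", "compile", "__import__"}

def audit_source(source):
    """Return a list of (lineno, description) for risky constructs.

    A toy sketch of AST-based injection detection; real tools layer
    taint tracking and many more rules on top of checks like these."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Direct dynamic-execution primitives: exec(...), eval(...)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append((node.lineno, f"dynamic execution via {node.func.id}()"))
        # base64.b64decode of an inline literal: classic obfuscated payload
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)
                and node.func.attr == "b64decode"
                and node.args and isinstance(node.args[0], ast.Constant)):
            findings.append((node.lineno, "inline base64 payload decoded"))
    return findings
```

Real analyzers also flag install-time hooks (setup.py side effects, .pth files), which is where this payload actually lived.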

samsk | a day ago

Interesting tool, will definitely try. Just curious: is there a tool (a hexora checker) that ensures that hexora itself and its dependencies are not compromised? And of course if there is one, I'll need another one for the hexora checker...

f311a | a day ago

There is no such tool, but you can use other static analyzers. Datadog also has one, but it's not AST-based.

ting0 | 23 hours ago

And easily bypassed by an attacker who knows about your static analysis tool and can iterate on their exploit until it no longer gets flagged.

fernandotakai | 22 hours ago

the main things are:

1. pin dependencies with sha signatures

2. mirror your dependencies

3. only update when truly necessary

4. at first, run everything in a sandbox

santiagobasulto | a day ago

I blogged about this last year[0]...

> ### Software Supply Chain is a Pain in the A*

> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically

AI is not about fancy models, it's about plain old Software Engineering. I strongly advised our team of "not-so-senior" devs to not use LiteLLM or LangChain or anything like that and just stick to `requests.post(...)`.

[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...
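
For what it's worth, the whole "client" really is a few lines. A minimal sketch, standard library only (not even requests); the endpoint path and response shape assume an OpenAI-compatible API, and `chat_completion` is just an illustrative name:

```python
import json
import urllib.request

def chat_completion(base_url, api_key, model, messages, timeout=30):
    """POST a chat request to an OpenAI-compatible endpoint and return
    the first choice's message content."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Retries, streaming, and provider quirks are the real work libraries paper over, but for many apps this is genuinely enough, and every line is auditable.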

eoskx | a day ago

Valid, but for all the crap that LangChain gets it at least has its own layer for upstream LLM provider calls, which means it isn't affected by this supply chain compromise (unless you're using the optional langchain-litellm package). DSPy uses LiteLLM as its primary way to call OpenAI, etc. and CrewAI imports it, too, but I believe it prefers the vendor libraries directly before it falls back to LiteLLM.

driftnode | 13 hours ago

the requests.post advice is right but its also kind of depressing that the state of the art recommendation for using llm apis safely in 2026 is to just write the http call yourself. we went from dont reinvent the wheel to actually maybe reinvent it because the wheel might steal your ssh keys. the abstraction layer that was supposed to save you time just cost an unknown number of people every credential on their machine

tom-blk | a day ago

Stuff like this is happening too much recently. Seems like the more fast-paced areas of development would benefit from a paradigm shift.

sirl1on | a day ago

Move Slow and Fix Things.

segalord | a day ago

LiteLLM has like a 1000 dependencies this is expected https://github.com/BerriAI/litellm/blob/main/requirements.tx...

zahlman | 20 hours ago

Oof. What exactly is supposed to be "lite" about this?

mark_l_watson | a day ago

A question from a non-python-security-expert: is committing uv.lock files for specific versions, and only infrequently updating versions a reasonable practice?

Imustaskforhelp | a day ago

(I am not a security expert either)

But one of the arguments I saw online about this was that when a security researcher finds a bug and reports it to the OSS project/company, they fix the code silently, include the fix in a new version, and only make the information public after some time.

So if you run infrequently updated versions, then you run a risk of allowing hackers access as well.

(A good example I can think of is OpenCode, which had an issue that could allow RCE. The security research team contacted OpenCode privately, but when no response came after some time, they released the knowledge publicly and OpenCode quickly made a patch to fix the issue. If you were running the older code, though, you would've been vulnerable to RCE.)

mark_l_watson | a day ago

Good points. Perhaps there is a way to configure uv to only use a new version if it is 24 hours old?

You can. See: https://docs.astral.sh/uv/reference/cli/#uv-run--exclude-new...

How you use it depends on your workflow. An entry like this in your pyproject.toml could suffice:

  [tool.uv]
  exclude-newer = "5 days"

mark_l_watson | 6 hours ago

thank you!

abhisek | a day ago

We just analysed the payload. Technical details here: https://safedep.io/malicious-litellm-1-82-8-analysis/

We are looking at similar attack vectors (pth injection), signatures etc. in other PyPI packages that we know of.

johnhenry | a day ago

I've been developing an alternative to LiteLLM. Javascript. No dependencies. https://github.com/johnhenry/ai.matey/

homanp | a day ago

How were they compromised? Phishing?

hmokiguess | a day ago

what's up with everyone in the issue thread posting thanks? is this an irony trend or is it a flex on the account takeover from teampcp? this feels wild

Shank | a day ago

I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and based on this, they're probably carefully mapping out what they can exploit and building payloads for each one.

It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.

post-it | 23 hours ago

It'll only be disruptive to people who are improperly managing their credentials. Cattle not pets applies to credentials too.

saidnooneever | a day ago

just wanna state this can literally happen to anyone within this messy package ecosystem. maintainer seems to be doing his best

if you have tips i am sure they are welcome. snarky remarks are useless. dont be a sourpuss. if you know better, help the remediation effort.

eoskx | a day ago

Also, not surprising that LiteLLM's SOC2 auditor was Delve. The story writes itself.

saganus | a day ago

Would a proper SOC2 audit have prevented this?

I've been through SOC2 certifications in a few jobs and I'm not sure it makes you bullet proof, although maybe there's something I'm missing?

shados | a day ago

SOC2 is just "the process we say we have, is what we do in practice". The process can be almost anything. Some auditors will push on stuff as "required", but they're often wrong.

But all it means in the end is you can read up on how a company works and have some level of trust that they're not lying (too much).

It makes absolutely zero guarantees about security practices, unless the documented process make these guarantees.

saganus | a day ago

Yeah, that was my understanding as well, so I fail to see how a proper SOC2 would have prevented this.

I mean, ideally a proper SOC2 would mean there are processes in place to reduce the likelihood of this happening, and also processes to recover if it did end up happening.

But the end result could've been essentially the same.

kyyol | a day ago

It wouldn't have. lol.

stevekemp | a day ago

Just so long as it was a proper SOC2 audit, and not a copy-pasted job:

https://news.ycombinator.com/item?id=47481729

syllogism | a day ago

Maintainers need to keep a wall between the package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge surface area.

Configure the CI to make a release with the artefacts attached. Then have an entirely private repo that can't be triggered automatically as the publisher. The publisher repo fetches the artefacts and does the pypi/npm/whatever release.
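
A rough sketch of that split, with illustrative names rather than a drop-in workflow. The public repo's tag-triggered CI only builds and attaches artifacts, and holds no registry credentials:

```yaml
# Public repo: build and attach artifacts only. No PyPI token exists here.
name: build-release
on:
  push:
    tags: ["v*"]
permissions:
  contents: write        # just enough to attach files to the release
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: python -m build
      - uses: softprops/action-gh-release@v2
        with:
          files: dist/*
```

The private publisher repo then fetches the release artifacts, and only it is registered as the Trusted Publisher (or holds the token for `twine upload`). A compromise of the public repo's CI can then deface a release, but it cannot push to PyPI.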

saidnooneever | a day ago

this kind of compromise is why a lot of orgs have internal mirrors of repos or package sources, so they can stay a few versions behind latest and avoid a compromise. seen it with internal pip repos, apt repos etc.

some will even audit each package in there (kind of a crap job but it works fairly well as a mitigation)

syllogism | a day ago

Just keeping a lockfile and updating it weekly works fine for that too yeah

anderskaseorg | a day ago

The point of trusted publishing is supposed to be that the public can verifiably audit the exact source from which the published artifacts were generated. Breaking that chain via a private repo is a step backwards.

https://docs.npmjs.com/generating-provenance-statements

https://packaging.python.org/en/latest/specifications/index-...

cpburns2009 | a day ago

Looks like litellm is no longer in quarantine on PyPI, and the compromised versions (1.82.7 and 1.82.8) have been removed [1].

[1]: https://pypi.org/project/litellm/#history

claudiug | a day ago

LiteLLM's SOC2 auditor was Delve :))

dev_tools_lab | a day ago

Good reminder to pin dependency versions and verify checksums. SHA256 verification should be standard for any tool that makes network calls.
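
With pip-tools that workflow might look like the following sketch (the digest is a placeholder, not a real hash):

```shell
# Generate a fully hash-pinned requirements file:
pip-compile --generate-hashes -o requirements.txt pyproject.toml

# requirements.txt then carries entries like:
#   litellm==1.82.6 \
#       --hash=sha256:<real-digest-here>

# and pip refuses any artifact whose sha256 doesn't match the pin:
pip install --require-hashes -r requirements.txt
```

Hash pinning wouldn't have saved someone installing the poisoned version fresh, but it does stop a silently replaced artifact for a version you already vetted.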

Aeroi | a day ago

whats up with the hundreds of bot replys on github to this?

zahlman | 20 hours ago

It seems to be a deliberate attempt to interfere with people discussing mitigations etc.

macNchz | a day ago

Was curious: a good number of projects out there have un-pinned LiteLLM dependencies in their requirements.txt (628 matches): https://github.com/search?q=path%3A*%2Frequirements.txt%20%2...

or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...

or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...

canberkh | a day ago

helpful

lightedman | a day ago

Write it yourself, fuzz/test it yourself, and build it yourself, or be forever subject to this exact issue.

This was taught in the 90s. Sad to see that lesson fading away.

noobermin | a day ago

I have to say, the long line of comments from obvious bots thanking the opener of the issue is a bit too on the nose.

zahlman | 20 hours ago

It doesn't need to be subtle if the goal is just to drown out actual discussion.

foota | a day ago

Somewhat unrelated, but if I have downloaded node modules in the last couple days, how should I best figure out if I've been hacked?

rvz | a day ago

What do we have here? Unaudited software completely compromised with a fake SOC 2 and ISO 27001 certification.

An actual infosec audit would have rigorously enforced basic security best practices in preventing this supply chain attack.

[0] https://news.ycombinator.com/item?id=47502754

westoque | a day ago

my takeaway from this is that it should now be MANDATORY to have an LLM scan the entire codebase prior to release or artifact creation. do NOT use third party plugins for this. it's so easy to create your own github action to digest the whole codebase and inspect third party code. it costs tokens yes, but it's also cached and should be negligible spend for the security it brings.

Ironically, Trivy was the first known compromised package, and its purpose is to scan container images to make sure they don't contain vulnerabilities. Kinda like the LLM in your scenario.

jimmySixDOF | 20 hours ago

Not sure that Trivy was doing that itself but zizmor is probably better than starting with an LLM :

https://github.com/zizmorcore/zizmor

cowpig | a day ago

Tried running the compromised package inside Greywall, because theoretically it should mitigate everything but in practice it just forkbombs itself?

aborsy | a day ago

What is the best way to sandbox LLMs and packages in general, while being able to work on data from outside sandbox (get data in and out easily)?

There is also the need for data sanitation, because the attacker could distribute compromised files through user’s data which will later be run and compromise the host.

cowpig | a day ago

Just wrote up a quick article on how greywall[0] prevents this attack:

https://greyhaven.co/insights/how-greywall-prevents-every-st...

[0] https://greywall.io/

ashishb | 16 hours ago

I wrote this[1] for myself last year. It only gives access to the current directory (and a few others - see README). So, it drastically reduces the attack surface of running third-party Python/Go/Rust/Haskell/JS code on your machine.

1 - https://github.com/ashishb/amazing-sandbox

ting0 | a day ago

I've been waiting for something like this to happen. It's just too easy to pull off. I've been hard-pinning all of my dependency versions and using older versions in any new projects I set up for a little while, because they've generally at least been around long enough to vet. But even that has its own set of risks (for example, what if I accidentally pin a vulnerable version). Either that, or I fork everything, including all the deps, and run LLMs over the codebase to vet everything.

Even still though, we can't really trust any open-source software any more that has third party dependencies, because the chains can be so complex and long it's impossible to vet everything.

It's just too easy to spam out open-source software now, which also means it's too easy to create thousands of infected repos with sophisticated and clever supply chain attacks planted deeply inside them. Ones that can be surfaced at any time, too. LLMs have compounded this risk 100x.

MarsIronPI | 23 hours ago

> Even still though, we can't really trust any open-source software any more that has third party dependencies, because the chains can be so complex and long it's impossible to vet everything.

This is why software written in Rust scares me. Almost all Rust programs have such deep dependency trees that you really can't vet them all. The Rust and Node ecosystems are the worst for this, but Python isn't much better. IMO it's language-specific package managers that end up causing this problem because they make it too easy to bring in dependencies. In languages like C or C++ that traditionally have used system package managers the cost of adding a dependency is high enough that you really avoid dependencies unless they're truly necessary.

consp | 19 hours ago

> Almost all Rust programs have such deep dependency trees that you really can't vet them all.

JS/TS > Screams aloud! Never do "npm import [package containing entire world as dependency]"

Rust > Just import everything since rust fixes everything.

When you design your package management and doctrine like de facto javascript, you have failed like javascript.

Pinning doesn’t help you. They can replace the package and you’ll get the new one. You have to vendor the dependencies.

davidatbu | 9 hours ago

I don't think pypi or npm allow replacing existing packages?

ctmnt | 8 hours ago

They absolutely do. In this case litellm 1.82.8 had been out for at least a week (can’t recall the exact date offhand). The compromised version was a replacement.

[OP] dot_treo | 7 hours ago

It actually wasn't. That was one of the reasons why I looked into what was changed. Even 1.82.6 is only at an RC release on github since just before the incident.

So the fact that 1.82.7 and then 1.82.8 were released within an hour of each other was highly suspicious.

cpburns2009 | 5 hours ago

1.82.7 and 1.82.8 were only up for about 3 hours before they were quarantined on PyPI.

ctmnt | 4 hours ago

Ah, my mistake! Thanks for the correction.

But I believe you can replace versions on both, nonetheless. It’s a multi step process, unpublish then publish again. But the net effect is the same.

somehnguy | 23 hours ago

Perhaps I'm missing something obvious - but what's up with the comments on the reported issue?

Hundreds of downvoted comments like "Worked like a charm, much appreciated.", "Thanks, that helped!", and "Great explanation, thanks for sharing."

kamikazechaser | 23 hours ago

Compromised accounts. The malware targeted ~/.git-credentials.

santiago-pl | 23 hours ago

It looks like Trivy was compromised at least five days ago. https://www.wiz.io/blog/trivy-compromised-teampcp-supply-cha...

Reminded me of a similar story at openSSH, wonderfully documented in a "Veritasium" episode, which was just fascinating to watch/listen to.

https://www.youtube.com/watch?v=aoag03mSuXQ

zahlman | 20 hours ago

The xz compromise was not "at openSSH", and worked very differently.

ilusion | 22 hours ago

Does this mean opencode (and other such agent harnesses that auto update) might also be compromised?

Nayjest | 22 hours ago

Use secure and minimalistic lm-proxy instead:

https://github.com/Nayjest/lm-proxy

  pip install lm-proxy

Guys, sorry, as the author of a competing opensource product, I couldn’t resist

sudorm | 22 hours ago

are there any timestamps available for when the malicious versions were published on pypi? I can't find anything except that the last "good" version was published on march 22.

sudorm | 22 hours ago

according to articles the first malicious version was published at roughly 8:30 UTC and the pypi repo taken down at ~11:25 UTC.

dweinstein | 21 hours ago

https://github.com/dweinstein/canary

I made this tool for macos systems that helps detect when a package accesses something it shouldn't. it's a tiny go binary (less than 2k LOC) with no dependencies that will mount a webdav filesystem (no root) or NFS (root required) with fake secrets and send you a notification when anything accesses them. Very stupid simple. I've always really liked the canary/honeypot approach, and this at least may give some folks a chance to detect (similar to LittleSnitch) when something strange is going on!

Next time the attack may not have an obvious performance issue!
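
The honeypot idea is simple enough to sketch in pure Python (`plant_canary`/`canary_tripped` are hypothetical names, and this is an atime-based toy, not how the Go tool works; it watches reads on a filesystem it serves itself):

```python
import os
from pathlib import Path

def plant_canary(path):
    """Write a fake-secret decoy and backdate its atime to a day before
    mtime, so that even relatime-mounted filesystems bump atime on the
    very next read. Returns the planted atime."""
    p = Path(path)
    p.write_text("AWS_SECRET_ACCESS_KEY=canary-not-a-real-key\n")
    st = p.stat()
    os.utime(p, (st.st_mtime - 86400, st.st_mtime))  # (atime, mtime)
    return p.stat().st_atime

def canary_tripped(path, planted_atime):
    """True if anything has read the decoy since it was planted."""
    return Path(path).stat().st_atime > planted_atime
```

noatime mounts defeat the trick entirely, which is presumably why the real tool serves decoys over WebDAV/NFS and sees every read directly.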

huevosabio | 15 hours ago

This is clever, and also interesting in that it could help stop the steal as it happens (though of course not perfect).

dweinstein | 3 hours ago

thanks for your feedback!

that's a really good point and could be an interesting thing to play with as an extension. Since we potentially know which process is doing the "read" we could ask the user if it's ok to kill it. obviously the big issue is that we don't know how much has already been shipped off the system at that point but at least we have some alert to make some tough decisions.

someguyornotidk | 10 hours ago

Thank you for sharing this!

I always wanted to mess with building virtual filesystems but was unwilling to venture outside the standard library (i.e. libfuse) for reasons wonderfully illustrated in this thread and elsewhere. Somehow the idea of implementing a networked fs protocol and leaving system integration to the system never crossed my mind.

I'm glad more people are taking this stance. Large centralized standard libraries and minimal audited dependencies is really the only way to achieve some semblance of security. There is simply no other viable approach.

Edit: What's the license for this project?

dweinstein | 3 hours ago

hi, glad you like it and that it encourages you to try some things you've always wanted to do :-)

I was thinking for the license I'd do GPLv3. Would that work for you?

tonymet | 20 hours ago

I recommend scanning all of your projects with osv-scanner in non-blocking mode

   # add any dependency file patterns
   osv-scanner -r .
as your projects mature, add osv-scanner as a blocking step to fail your installs before the code gets installed / executed.

datadrivenangel | 20 hours ago

This, among some other issues, makes me consider ejecting and building my own LLM shim. The different model providers are bespoke enough, even within litellm, that it sometimes seems like a lot of hassle for not much benefit.

Also the repo is so active that it's very hard to understand the state of issues and PRs, and the 'day 0' support for GPT-5.4-nano took over a week! Still, tough situation for the maintainers who got hacked.

ps06756 | 20 hours ago

Can someone enlighten me as to why someone would use LiteLLM over, say, AWS Bedrock? Or over building a lightweight router and directly connecting to the model provider?

mathis-l | 20 hours ago

CrewAI (uses litellm) pinned it to 1.82.6 (last good version) 5 hours ago but the commit message does not say anything about a potential compromise. This seems weird. Is it a coincidence? Shouldn’t users be warned about a potential compromise?

https://github.com/crewAIInc/crewAI/commit/8d1edd5d65c462c3d...

r2vcap | 20 hours ago

Does the Python ecosystem have anything like pnpm’s minimumReleaseAge setting? Maybe I’m being overly paranoid, but it feels like every internet-facing ecosystem should have something like this.

arrty88 | 19 hours ago

Oooof another one. I think i will lock my deps to versions at least 3 months old.

gaborbernat | 19 hours ago

Recommend reading related blog post https://bernat.tech/posts/securing-python-supply-chain

saharhash | 18 hours ago

Easy tool to check if you/other repos were exposed: https://litellm-compromised.com

getverdict | 17 hours ago

Supply chain compromises in AI tooling are becoming structural, not exceptional. We've seen similar patterns in the last 6 months — Zapier's npm account (425 packages, Shai Hulud malware) and Dify's React2Shell incident both followed the same vector: a trusted package maintainer account as the entry point. The blast radius keeps growing as these tools get embedded deeper into production pipelines.

agentictrustkit | 17 hours ago

I think this gets a lot worse when we look at it from an agentic perspective. When a dev hits a compromised package, there's usually a "hold on, that's weird" moment before a catastrophe. An agent doesn't have that instinct.

Oh boy, supply chain integrity will be an agent governance problem, not just a devops one. If you send out an agent that can autonomously pull packages, write code, or access creds, then the blast radius of compromises widens. That's why I think there's an argument for least-privilege by default: agents should have scoped, auditable authority over what they can install and execute, and require approval for anything outside those boundaries.

dhon_ | 15 hours ago

I have older versions of litellm installed on my system - it appears to be a dependency for aider-chat (at least on NixOS)

vlovich123 | 13 hours ago

I maintain that GitHub does a piss poor job of hardening CI so that one step getting compromised doesn’t compromise all possible secrets. There’s absolutely no need for the GitHub publishing workflow to run some third party scanner and the third party scanner doesn’t need access to your pypi publishing tokens.

This stupidity is squarely on GitHub CI. Trivy is also bad here but the blast radius should have been more limited.
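For anyone wondering what "more limited blast radius" could look like in practice, here's a rough GitHub Actions sketch (not litellm's actual workflow; the scanner script name is hypothetical): run third-party scanners in a job with zero token scopes, and grant only the publish job the ability to mint credentials, e.g. via OIDC trusted publishing so there's no long-lived PyPI token at all.

```yaml
# Sketch only -- not the real litellm workflow.
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions: {}                     # scanner job gets no token scopes
    steps:
      - uses: actions/checkout@v4
      - run: ./run-third-party-scanner.sh   # hypothetical scanner step

  publish:
    needs: scan
    runs-on: ubuntu-latest
    permissions:
      id-token: write                   # only this job can mint the OIDC token
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

With that split, a compromised scanner step can still tamper with the checkout, but it can't exfiltrate publishing credentials, because its job never has any.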

postalcoder | 12 hours ago

FYI, npm/bun/pnpm/uv now all support setting a minimum release age for packages.

I updated my global configs to set min release age to 7 days:

  ~/.config/uv/uv.toml
  exclude-newer = "7 days"
  
  ~/.npmrc
  min-release-age=7 # days
  
  ~/Library/Preferences/pnpm/rc
  minimum-release-age=10080 # minutes
  
  ~/.bunfig.toml
  [install]
  minimumReleaseAge = 604800 # seconds

jerrygoyal | 12 hours ago

I don't think that syntax is correct for pnpm.

postalcoder | 11 hours ago

Works for me?

  $ pnpm add -D typescript@6.0.2
   ERR_PNPM_NO_MATURE_MATCHING_VERSION  No matching version found for typescript@6.0.2 published by Wed Mar 18 2026..
You could also set the config this way:

  pnpm config set minimumReleaseAge 10080 --global
You may be thinking about the project-specific config, which uses YAML.

https://pnpm.io/cli/config

Do you know if there's a way to override this specifically when I want to install a security patch? uv just claims that the package doesn't exist if I ask for a new version.

postalcoder | 9 hours ago

Yes there is. You can use those configs as flags in the CLI to override the global config.

eg:

  npm install <package> --min-release-age 0
  
  pnpm add <package> --minimum-release-age 0
  
  uv add <package> --exclude-newer "0 days"
  
  bun add <package> --minimum-release-age 0

tomtomtom777 | 5 hours ago

I understand that this is a good idea, but it does feel really weird: set a min-release-age and wait to see if anyone who didn't gets bitten first.

Next up, we're going to advise a minimum release age of 14 days, because most other projects use 7 days.

bonoboTP | 3 hours ago

You don't have to outrun the bear, just the other guy. There will always be early adopters.

And maybe more importantly: security tools and researchers.

avian | 10 hours ago

What's with the hundreds of comments like "This was the answer I was looking for." in that GitHub thread?

They also seem to be spilling into HN [1].

Runaway AI agents? A meme I'm too old to understand?

[1] https://news.ycombinator.com/item?id=47508315

ramimac | 9 hours ago

It's a spam flood by the attacker to complicate information sharing[1]. They did the same thing in the Trivy discussion, with many of the same accounts.[2]

[1] https://ramimac.me/teampcp/#spam-flood-litellm [2] https://ramimac.me/teampcp/#discussion-flooded

latable | 8 hours ago

So now we feel the need to add malware protection to the CI, like we put Comodo on Windows 7 and prayed while surfing shady torrent websites? It is pretty ironic that an extra tool meant to protect against threats gets compromised and creates an even bigger threat. Some here talk about better isolation during development and CI, but the surface area is huge, and isolating all of it is probably impractical. Even if the CI is well isolated, the produced package is still compromised.

What about reducing the number of dependencies? Integrating core functionality into builtin language libraries? Avoiding frequent package updates? Avoiding immature/experimental packages from developers of unknown reliability?

These issues are grave. I see no future in which they get rarer, and I am afraid they may wipe out the open-source movement's credibility.

zx8080 | 8 hours ago

> the compromise originated from the Trivy dependency used in our CI/CD security scanning workflow.

What is the source of compromise?

Does anyone have a list of other compromised projects?

Bullhorn9268 | 4 hours ago

I am from FutureSearch and went through this with Callum (the OG). We did a small analysis of the packages here: https://futuresearch.ai/blog/litellm-hack-were-you-one-of-th... and also built this mini tool to analyze the likelihood of your getting pwned through this: https://futuresearch.ai/tools/litellm-checker/

n1tro_lab | 2 hours ago

The scariest part is that LiteLLM is a transitive dependency. The person who found it wasn't even using LiteLLM directly; it got pulled in by a Cursor MCP plugin. The supply chain attack surface for AI tooling is massive because these packages get pulled in as dependencies of dependencies, and nobody audits transitive installs.
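If you want to check your own environment for direct dependents of a package like litellm, here's a small stdlib-only sketch using `importlib.metadata` (the parsing of requirement strings is deliberately crude; a real tool would use the `packaging` library):

```python
# Sketch: find which installed distributions directly depend on a given
# package, using only the standard library. Names are normalized per
# PEP 503 so "LiteLLM" and "litellm" compare equal.
import re
from importlib import metadata

def normalize(name: str) -> str:
    return re.sub(r"[-_.]+", "-", name).lower()

def direct_dependents(target: str) -> list[str]:
    target = normalize(target)
    dependents = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # requirement strings look like "litellm>=1.0; extra == 'proxy'";
            # crudely cut the name off at the first operator/separator
            req_name = normalize(re.split(r"[;\s<>=!~\[]", req, maxsplit=1)[0])
            if req_name == target:
                dependents.append(dist.metadata["Name"])
                break
    return dependents

if __name__ == "__main__":
    print(direct_dependents("litellm"))
```

This only shows one hop; to see the full transitive chain you'd walk it recursively, or just run something like `uv tree` / `pipdeptree --reverse` in the affected environment.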