
Full on tinfoil hat here. But warranted and practical.

I'm wondering what fallout we'll see from this backdoor in the coming weeks, months or years. Was the backdoor used on obscure build servers or obscure pieces of build infrastructure somewhere? Lying dormant, to start injecting code into built packages at some moment in the future, maybe? Are distros going to go full-on tinfoil-hat and lock down their distributions, halting progress for a long time? Are software developers (finally?) going to remove dependencies (now proven to be liabilities!), causing months of refactoring and rewriting, without any other progress?



>Are software developers (finally?) going to remove dependencies (now proven to be liabilities!), causing months of refactoring and rewriting, without any other progress?

How is that even a possibility? xz was very useful software, which can only exist if people with significant knowledge put effort into it. Not every OSS project has the ability or resources to duplicate that. The same goes for many, many other dependencies.

I believe that there is essentially nothing you can do to prevent these attacks with the current software creation model. The problem here is that it is relatively simple for a committed actor to make significant contributions to a publicly developed project, but that openness is also the greatest asset of that development model. It is extremely hard to judge the motivation of such an individual; for most benign contributors it is interest in the project, which they project onto their co-contributors.


Agreed. More than that, there's not much of a way to prevent these kinds of attacks, period, whether in software or otherwise, if the perpetrator is some intelligence agency or such.

For threats lesser than a black op, the standard way of mitigating supply chain attacks in the civilized world is through contracts, courts, and law enforcement. I could, in theory, get a job at a local food manufacturer and, over the course of a year or two, reach the point where I could start adding poison to the products. But you're relatively confident that this won't happen, because should it ever happen, the manufacturer would be smeared and sued, and they'd be very quick to find me and hand me over to the police. That's how it works for pretty much everything; that's how trust is established at scale.

Two key components of that: having responsibility for the quality/fitness for use of your product, and being able to pass the blame up to your suppliers, should you be the victim too. In other words: warranty, and being an easily identifiable legal entity. Exactly the two components that Open Source development does away with. Software made by a random mix of potentially pseudonymous people, offered with zero warranties. This is, of course, also the reason OSS is so successful. Rapid, unstructured evolution. Can't have one without the other.

Or in short: the only way I see to properly mitigate these kinds of threats is to ditch OSS and make all software commercial again (and legally force vendors to stop with the "no warranty" clause in licensing). That doesn't seem like a worthwhile trade-off to me, though.


>the only way I see to properly mitigate these kinds of threats is to ditch OSS and make all software commercial again (and legally force vendors to stop with the "no warranty" clause in licensing).

Which just pushes the problem to commercial companies getting a 'friendly' national security letter, which they can't talk about to anyone, stating they should add REDACTED to the library they provide.


Correct. Hence the disclaimer in my first paragraph, which could also be stated as the threat duality principle, per James Mickens[0]:

"Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT."

--

[0] - https://www.usenix.org/system/files/1401_08-12_mickens.pdf


Code is law. As such, the "standard way" you mention is appropriate for people with zero strategic foresight. There is no absolute need to depend on third parties to solve your problems, and the possibility of limiting and dispersing trust to mostly yourself is real. Sure, glowies can always get to you, but they can't get to you everywhere, nor all the time. Security/assurance models and both proprietary and free software architectures are already adapting to these facts.


Not all dependencies are of "xz" complexity.

Minimizing dependencies probably means keeping a few libs for things like crypto or compression and such.

But do you need a library to color console output? Even if colored console output is a business-critical feature, you don't need a (tree of) dependencies for that. I see so much rather trivial software that comes with hundreds or thousands of dependencies; it's mind-boggling, really. Why have 124 million people downloaded a rubygem that loads and parses a .env file, something I do in a bash one-liner? Why do 21k public npm packages depend on a library that does "rm -f"?
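For the record, the bash one-liner I mean is roughly this (assuming the file is plain KEY=value lines, no quoting or multiline values):

    # export everything defined while sourcing .env
    set -a; . ./.env; set +a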

The answer, I'm afraid, is mostly that people don't realize this isn't just some value added to their project, but rather a liability.

Some liabilities are certainly worth it. xz is probably one of them. But a library that does "rm -f" certainly isn't.


It's impossible to insure against in any practical terms.

The way forward is to invest heavily in a much more security-oriented kernel(s) and make sure that each program has the bare minimum to achieve what it offers as a value-add.

The human aspect of vetting seems like an impossibly difficult game of whack-a-mole. Though realistically, I doubt that the bad actors have infinite agents everywhere; that also has to be said. So maybe a "sweep" could eliminate 90% of them, though I'd be skeptical.


Agreed, as a developer: minimize your dependencies while providing your core function. Don't grant dependencies permissions they don't need. Be granular about it. Austral lets you select what filesystem, network, etc. access each library gets.

Also, in big organizations, risk assessment is more about making sure there is someone to point the finger at, than actual security. Treating libfubar as golden because it ships with something you paid another company money for makes sense in that light. But not from an actual security mindset.


"reduce the attack surface" is Security 101. Noting again that sshd doesn't natively use xz/liblzma (just libz) or systemd, so I don't think I need to point out where the billowing attack surface is ;)

Apache (by way of mod_systemd) is similarly afflicted, as is rsyslogd, I guess most contemporary daemons that need systemd to play fair are (try: "fuser -v /lib64/liblzma.so.?" and maybe "ldd /lib64/libsystemd.so.?" too).
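A quick way to see the chain on your own box (library paths differ per distro; on the patched Debian/Fedora builds, sshd links libsystemd, which in turn links liblzma):

    # does sshd itself pull in libsystemd or liblzma?
    ldd /usr/sbin/sshd | grep -E 'libsystemd|liblzma'
    # and does libsystemd drag liblzma in transitively?
    ldd /lib64/libsystemd.so.0 | grep liblzma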

Like a Luddite I still use Slackware and prefer to avoid creeping dependencies, ever since libkrb5 started getting its tentacles into things more than a decade ago.


Yeah, it's almost like "do one thing and do it well" had security benefits...

SELinux has the desired sort of granular permissions at the OS level, but if everything is dynamically linked to everything else, that doesn't help: the tiniest lib is now part of every process and can hence pick and choose permissions.

But even if we go full monolith OS when systemd takes over the job of the kernel and the browser, that just changes where we need those permissions implemented. We can't practice zero trust when there is no mechanism for distrust in the system.


> Agreed, as a developer: minimize your dependencies while providing your core function. Don't grant dependencies permissions they don't need. Be granular about it. Austral lets you select what filesystem, network, etc. access each library gets.

Still wouldn't help for this particular exploit.


If systemd could deny liblzma any syscall or filesystem access, that would have prevented it. It is only used to compress a data stream, so it only needs read access to one buffer and write access to another. I realize there is no current mechanism for these granular permissions; that is what I was proposing be addressed.
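The closest thing we have today is process-level sandboxing, e.g. systemd's own hardening directives; a rough, purely illustrative drop-in for a hypothetical mydaemon.service (the values are examples, not a recommendation, and some would break real daemons):

    # systemctl edit mydaemon.service
    [Service]
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    SystemCallFilter=@system-service
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX

But all of that applies to the whole process; every library loaded into it inherits whatever the process is allowed, which is exactly the gap.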


We don't have a way to apply restrictions on a per-library basis. This is generally quite difficult to do.


I know. That's what's missing from our current technology. I am honestly tired of everyone collectively pretending this is not a problem. Periodically we get very grim reminders that it is in fact a problem, everyone pretends to care for a month, then it's all back to where it was.

It's depressing. (And no, this comment does not imply you are such. I am responding + ranting.)


I suspect most of your frustration comes from reading "we don't think this is a good place to spend our effort" as "there are no problems here".


Likely. Though people not seeing the problem is quite frustrating by itself.


In a way it would.

If a software project has hundreds of dependencies, finding the one that was compromised is hard, impossible even. But if it has three dependencies (that aid in the core functionality), keeping a keen eye on them is much easier.

When I look at a typical `node_modules` or `pipenv` directory, I see there's absolutely no way I can vet that all is safe in there. When I look at my typical cargo tree, the four or five dependencies (of dependencies) are manageable enough to just go over every so often.

Automation helps. But it doesn't give me the confidence that simply opening the project pages of the stuff I use once every few months does.
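Concretely, the periodic check I mean is something like this (flags from memory, so double-check them):

    # the direct dependencies -- the list I actually review by hand
    cargo tree --depth 1
    npm ls --depth=0

    # the full transitive tree, mostly to see how bad it is
    cargo tree | wc -l
    npm ls --all 2>/dev/null | wc -l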


Since I haven't kept as current as I wanted to be (work and life happen a lot lately), what could have prevented it?


> The way forward is to invest heavily in a much more security-oriented kernel(s)

While I don't disagree that kernels should be secure, I also don't see how that would have helped in this case, given that (AFAICT) this attack didn't rely on any kernel vulnerabilities.


True, I wasn't specific enough. The attack exploited the fact that nobody treats security as a serious enough problem. It's a failure of us (the technical community) as a whole, and a very shameful one at that.


IIRC Debian has wiped and is rebuilding all their build hosts, so yes.

But while I understand what you mean, I would not call improving the security of a piece of software “halting progress”. Security improvements are progress, too. Plus, revisiting processes and assumptions can also give opportunities to improve efficiency elsewhere. Maintenance can be an opportunity if you approach it properly.


What I meant by "halting progress" is what commonly happens when a piece of software is "rewritten from scratch": users (or clients or customers) see no improvements for weeks or even years, while the business is burning money like mad.

That's the main reason why I am firmly opposed to "rewrite from scratch" or "we'll need some weeks to refactor¹".

Removing upstream dependencies and replacing them with other deps, with no-code, or with self-written code² is a task that takes a long time, during which stakeholders see no value added other than "we reduced the risk"; in the case of e.g. SaaS, that's not even a risk these stakeholders are exposed to, so they see "nothing improving". I'm certain a lot of managers, CTOs and developers are suddenly realizing that, wow, dependencies really are a liability.

¹ I am not against refactoring, just very much against refactoring as a standalone, large task. Sometimes it's unavoidable because of poor/hard choices made in the past, but it's always a bad option. The good option would be "refactor on touch" - refactoring as part of our daily job of writing software.

² Too often do I see dependencies that are ridiculously simple, left-pad being the poster child. Or dependencies that bring everything and the kitchen sink, but all we use is this one tiny bit that would've cost us less than 100 lines to write. Or dependencies that solve -- nothing, really? Just that no one took the time to go through them and remove them. And so forth and so on.
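(Left-padding, for instance, is one printf call in practically any environment; even in a shell:)

    # pad "$s" with spaces on the left to a width of 10
    printf '%10s\n' "$s"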


Not fallout, but increased vigilance is the expected most significant outcome. Some 5+ years ago I listened to a talk by a Debian guy about reproducible builds and security; he was urging the audience, in very concrete detail, to be aware of exactly the kind of thing that has just happened. One of the details he mentioned was glowies having moved their focal point to individual developers and their tooling & build systems. At least some people who matter have been working on these threats for many years already; maybe more people will start to listen to them, in which case this entire debacle could have a net positive effect in the long run.


> Was the backdoor used on obscure build servers or obscure pieces of build infrastructure somewhere?

And developer machines. The backdoor was live for ~1 month on testing releases of Debian and Fedora, which are likely to be used by developers. Their computers can be scraped for passwords, access keys and API credentials for the next attack.
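(If you want to check a machine: the compromised releases were xz/liblzma 5.6.0 and 5.6.1, so something like the below is a first pass, though distros have since shipped reverted builds, so the version string alone isn't conclusive.)

    xz --version                           # prints both xz and liblzma versions
    dpkg -l | grep -E 'xz-utils|liblzma'   # Debian/Ubuntu
    rpm -q xz xz-libs                      # Fedora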


> Are software developers (finally?) going to remove dependencies (now proven to be liabilities!), causing months of refactoring and rewriting, without any other progress?

We've been here before, with e.g. event-stream and colors on npm. So I don't think it will change much. Except maybe people will stop blaming it on JS devs being script kiddies in their minds, when they realise that even the traditional world of C codebases and distro packages is not immune.


You can't really remove dependencies in open source. It is so intertwined at this point that doing it would be too expensive for most companies.

I think the solution is to containerize, containerize, and then containerize some more, and to build it all with zero trust in mind.


Containerizing is entirely the worst response here. Containers, as deployed in the real world, are basically massive binary blobs of completely uncertain origin, usually hard to reproduce, that easily permit the addition of unaudited invisible changes.

(Yes yes, I know there are some systems which try to mitigate this, but I say as deployed in the real world.)
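(The usual mitigation sketch, for what it's worth: pin base images by digest instead of a floating tag, so the blob at least can't silently change under you. The image name and digest below are placeholders:)

    # resolve the digest of what you're actually running
    # (only meaningful for images pulled from a registry)
    docker image inspect --format '{{index .RepoDigests 0}}' myimage:latest
    # then pin it in the Dockerfile:
    #   FROM alpine@sha256:<digest>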


Your application is already most likely a big binary blob of uncertain origin that's hard to reproduce. Containers allow these big binary blobs of uncertainty to at least be protected from each other.


Pretty much; in a "traditional" system, updating, say, libssl fixes the bug for the running app, or maybe the 2-3 dependent apps.

Put all of them in containers and now every single one needs to be rebuilt with the dep fixed, and instead of having one team (ops) responsible, you now need to coordinate half of the company to do so. It's not impossible, but in general much more complex, despite containers promising "simpler" operations.
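Even the audit step that suddenly becomes everyone's problem looks roughly like this (a sketch; it assumes the images contain a shell and dpkg or openssl, which many minimal ones don't):

    # which of the local images still bundle an old libssl?
    for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
      echo "== $img"
      docker run --rm --entrypoint '' "$img" sh -c \
        'openssl version 2>/dev/null; dpkg -l 2>/dev/null | grep -i libssl' || true
    done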

...that being said, I don't miss playing the whack-a-mole game with developers who do not know what their apps need to be deployed to production, and who for some inexplicable reason tested their app on unstable Ubuntu while all of the servers run some flavour of stable Linux with slightly older libs...


Docker containers are not really a security measure.


It is a security measure. Sure, it doesn't secure anything in the container itself, but it secures the container from other containers. Code cannot (as proven) be trusted, but the area of effect can be reduced.


Only with additional hardening between the container and the kernel and hardware itself.



