Inside the failed attempt to backdoor SSH globally that got caught by chance (doublepulsar.com)
221 points by transpute 10 months ago | 191 comments



> the world owes Andres unlimited free beer. He just saved everybody’s arse in his spare time. I am not joking.

Agreed.

> and face significant amounts of online abuse from users of the software.

Much as I’d like this to change, I suspect it will not. I’ve been doing open-source work since the mid-90s, and have suffered abuse that would curl your toes, but I still do it, for reasons that would be obscure to many folks hereabouts.

I think the onus is on the users of the software. It’s currently way too easy to make “LEGO block” software, by splicing together components, written by others, that we do not understand (which, in itself, is not necessarily bad. We’ve been working this way for centuries). I remember reading a report that listed YAML as one of the top programming languages.

If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk. Since it’s currently possible to make huge buckets of money, by writing crap, we’re not incentivizing high-Quality engineering.

I don’t think there’s any easy answer, as a lot of the most effective solutions are cultural, as opposed to technical. Yesterday, someone posted a part of an old speech, by Hyman Rickover[0], that speaks to this.

[0] https://news.ycombinator.com/item?id=39889072


> If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk.

This, each company needs to take some amount of responsibility for the stack that they use. A company that I worked for, sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded.

Of course, it will always be hard to detect very sophisticated attacks from actors that can play the long game. But if maintenance is not just a hobby, PRs can receive more attention, and if it's not an added workload besides a day job, there is less churn/burn-out which can be used for malicious actors to take over a project.


"...each company needs to take some amount of responsibility for the stack that they use. A company that I worked for, sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded."

Agreed, 100%. And yet: In a comment on Slashdot one guy said that his company had "thousands" of machines running Linux, and he was "proud" to have never paid a penny for it. I called him a parasite: open-source brings an immense value to his company, and he should support it.

My comment got a lot of hate, which I just don't understand. Sure, OSS licenses say you can do whatever you want. However, there is surely some ethical obligation to support something that your entire business depends on?


It sounds to me like that person was at least helping to legitimize and popularize the software they use. Open source needs users just as badly as it needs contributors to be successful. Linux would have died if it never had large-scale adoption by "parasites".


“We’ll pay you in exposure”


Most open source contributors aren't doing it to be paid; if there isn't enough demand or support they simply stop working on it and that's it. Usually once a project is big enough then there's enough demand from users that big companies start donating both money and employee time towards a project (AWS for example had dedicated employees contributing to Redis, and Linux as a foundation is supported by countless companies). The emphasis is on the money going towards supporting the project to keep it sustained, not maximizing profit.


> Usually once a project is big enough then there's enough demand from users that big companies start donating both money and employee time towards a project

I don't agree with “usually” there, I think it is an exception rather than the rule.

Where it happens, it tends to be higher-profile, top-level, visible projects that get this treatment from the commercial bodies that rely on them. The smaller projects that they might depend upon are likely to stay less visible, less thanked, and unsupported by those using them (directly or indirectly). What has happened with xz-utils is a very good example of this and of the potential dangers the situation poses.

I don't know how we should address this. It certainly isn't the responsibility of people like Collin to address, unless of course they want to take on that responsibility.

> if there isn't enough demand or support they simply stop working on it and that's it

Sometimes even if there is enough demand they stop, as is their right, when other priorities come up in their life (or they simply lose interest). A “community” using their stuff does not, and should not, automatically make them beholden to that community.



I have been paid to write FOSS, I have paid others to do so, I have dragged cheques out of major banks to sponsor conferences. And yet I still do not know of a convincing way to get money from where it sits to the pockets of all the devs working on the stack, let alone in a fair manner, let alone in a way that allows the vibrant and high-quality development processes that exist (i.e. “the government pays developers” will almost certainly be a dead hand on development).

Honestly the problem feels like it is capitalism itself … which is weird.


It is. You don’t spend money if you don’t have to, unless you are motivated by something that isn’t ‘more money’. E.g. taxes or charity (which, let’s face it, is usually taxes).

Thus if the license says ‘you don’t have to pay money to use this, nor do you have to do anything at all’ it’s no wonder it gets used for nothing in return. Stallman was obviously right with the GPL keeping the source free instead of end users.


Why do you mention capitalism and not the economy in general? Any resource allocation exercise is "complex" and "hard". Probably there is a better way, but I expect any progress to be slow. The open source movement started in the '80s and it took a long time to become (a bit more) mainstream.


Because they are one and the same?


I have asked that question many times [1] when confronted with this type of muhhh capitalism remark and have yet to receive a valid response. I suspect it is the result of the ideologically lopsided makeup of educational institutions, which churn out the message that capitalism bad, ${my_ideology} good, where ${my_ideology} tends to stand for one on the left side of the political spectrum. This does not provide students with the means to actually defend their position, only with the correct slogans to use - a rather shallow base for a political conviction.

Of course HN is not the place to discuss politics so in some way it is probably for the best that these questions remain unanswered. It would be better still for those muhhh capitalism - and for that matter muhhh whatever-ism - remarks to remain unwritten since they are not supposed to be discussed here anyway.

[1] e.g. https://news.ycombinator.com/item?id=39866101


> And yet I still do not know of a convincing way to get money from where it sits to the pockets of all the devs working on the stack

This is the purpose of licensing and you're describing a failure in the popular licenses that were popularized by the FSF and similar groups.

We have been indoctrinated. It is not always appropriate to give away your source code for free to everyone, and corporations without the moral qualms that individual developers have are happy to take advantage of their naive licensing.

We, as developers, need licenses that are less liberal, such that when corporations use our code, we are properly compensated for the value we have provided to them.

I think this means a deep rethinking of how we license software, and reopening ourselves to the possibility that proprietary licensing is not always bad.


I have found the loudest complainers almost always end up the ones who are not paying.


OpenBSD's famously crusty owner along with his leads short-circuit this sort of entitlement by asking "If you have a complaint, where's your commit to fix it?"


Honestly, I loathe that attitude. Maybe OpenBSD has such a limited audience that the overlap between "people who discover problems" and "people who can fix problems" is perfect, but generally it's not.


Well, there's a drawback to that attitude, in the realm of "unintended consequences."

I wrote a little article about that: https://littlegreenviper.com/miscellany/problems-and-solutio...


To add, there's a massive difference between legitimate criticism and ungrateful entitlement. Complaints, even bad ones, are a crucial part of the feedback loop. If 99% of users say "this is terrible" with no elaboration, sure it doesn't tell you much, but it sure as shit means that there is probably something wrong that needs to be fixed.


Or it could mean that the 99%, or a large part thereof, have chosen the wrong tool for their task, which is a problem for them to fix, not the tool's maker.


Could mean a lot of things, could mean nothing at all.


> This, each company needs to take some amount of responsibility for the stack that they use. A company that I worked for, sponsored upstream maintainers, sometimes for implementing specific functionality, sometimes without any specific goal. If more companies did this for their open source stacks, open source development would be much better funded.

This is good but I don’t think it’s enough to cover stuff like this which is deep in the stack and probably doesn’t have enough demand to warrant the trouble of direct funding (thinking of how many organizations make it easier to buy a $50k support contract than a $50 open source donation). I’ve been wondering if you could get companies to pay to something like the Linux Foundation, Apache, etc. for a general maintainer group tasked with basically taking something like Debian’s popcon and going down the list of things which aren’t well-established projects to pick up maintenance work.


One of the problems with the "money solution" in this case is that xz is a very small, relatively stable piece of software. Sure, things like the Linux kernel, Firefox, GNOME or OpenSSH could use huge donations to fund multiple developer day jobs for years. But xz is small; it doesn't need to add a lot of new features constantly. It does need maintenance, but probably only a couple of hours each month – surely not enough to warrant a full-time day job. So what does the dev do with the other 90% of his time (and how does he earn the other 90% of the money)? Some people don't like juggling multiple jobs (very stressful), some corporate jobs don't allow it, plus you've done nothing to reduce the bus factor (ideally any vital library should have 2, 3, … people working on it, but who can carve off just 5% of their day job to devote to open source maintenance?)


Well, you could have a "board," that reviews and rates software. Kind of like financial rating companies.

There could be a lot of variables to the rating, like the size of the team, the language, the testing methodology, etc.

The board would need to be supported by end-users, and big steps would need to be taken to ensure true independence.

It would have to be carefully done, though, and likely immensely unpopular with the HN crowd.


You could have a "Foundation" funded by corporate stakeholders. It doesn't even have to have a scope as broad as all open source software; it could be a "Linux Foundation". Google and Amazon and Netflix and whomever else uses Linux would be members of this Foundation on some kind of tiered plan, and their membership fees would go to maintaining critical, Linux-adjacent software.


While I’m not specifically opposed to this, you have to be careful about the edge cases and how they end up being rated. SQLite is a great example for the variables you started with:

> Size of team

3, with no outside contributions accepted into the core.

> The language

C

Those two, presumably, would end up being heavily weighted towards having a low score.

> Testing methodology

Here they’d probably get some points back based on their ~10:1 test lines vs code ratio.


C is what Linux and Git are written in.

Just because the kidz doan' like it, does not mean it's bad.

1 is the loneliest number (you have to be an old fart to get that), but big teams can be a problem (a camel is a horse designed by committee). I'd say the experience and track record of the principals are a big factor (see "Linux" and "Git").


i think you have it backwards. giant teams == big risks, esp. if there aren't strict and formal processes to evaluate new contributors.

3 people are easy to evaluate, have been on the project forever, and presumably know what they're doing.

C shouldn't be a problem per se -- again, it's been around a long time, and while it makes it easy to bork pointers, it's not a meme language that we don't know much about.


I think the 'easy' answer is liability, same as it is for any other complex human engineering achievement. Liability though would mean at the very least allowing commit access to only identified individuals and companies that are willing to pay for insurance coverage (to gain commit access).

This would probably ruffle too many feathers among the GNU old-timers, but I really don't see any other option. We are way past the tinkering-in-the-basement days of Linux/BSD hackers, when most of us just wanted a cheap Unix box to play around with or to avoid Windows. A massive percentage of civilian (and other) infrastructure is built on the shoulders of unpaid hobbyists. There is already massive liability at the social and corporate level. Time to deal with it.

EDIT: Ok, sounds like I have to describe this better: 1) you (governments) force commercial providers to assume liability for security issues and exploits and force disclosure, etc., 2) their insurance premiums go up, 3) to reduce premiums they only use checked/secured software, 4) that means maintainers of at least the critical pieces of software get paid via the (new) channel of risk reduction. Doesn't apply to all OSS, doesn't even apply to all distros. But it creates an audit trail and potentially actual compensation for maintainers.


Sounds like a good way to kill off open source entirely. This is luckily unlikely to happen.

As for throwing money at the maintainers, honestly, it’s complicated. A lot of people aren’t doing open source work for the money. Money too often comes with strings, requirements to prioritize what the funder wants to prioritize, pressure to perform consistently, it becomes an actual job.

Not only does this turn off a lot of the types of folks who make the best contributions to these projects, but it bends the priorities toward what would make the most money for the funder. And as this article points out, real security investments often fall by the wayside when profit is involved.

So yes, companies should encourage their workers to contribute to these projects, donate money to the foundations that fund them, hire important maintainers and give them carte blanche to work on open source. But we have to be careful. Making it all completely transactional is directly contradictory to what drives a lot of the contributions.


Given the level of age discrimination in software engineering, maybe we should add a stipend to the pension plans of retired developers who work on open-source projects.

Yes, the devil is in the details, but I think the basic concept is worth exploring.


At least two of the maintainers of the framework I originally authored (and is now being maintained by a team, and used worldwide), are retired engineers. They are outstanding engineers, with great pedigrees, and bring real technical leadership to the project.

In that particular project, we're all just Paying It Back (not forward), but other projects could likely benefit from the participation of Grumpy Old Farts.


We are not talking about all of open source here; there are crucial bits of code and less crucial bits of code. xz/OpenSSH was obviously in the first category. How do you determine which ones are more critical? Same as you would for a bridge or a plane: by risk, impact, etc. That's basically liability.

And obviously a non-insured piece of code that assumes no liability whatsoever can still be free and maintained via IRC, same as it ever was. I dont see how this "kills all open source".


Companies and in fact everyone has the choice to NOT use software that comes without warranty. But, of course, the cost difference will be astronomical. Alternatively companies and everyone have the choice to inspect open source software for security problems BEFORE use. Of course, astronomical cost.

This is an attempt to shift costs onto open source developers. Which, aside from being totally unjust, won't work. There's a legal expression "you can't squeeze blood out of a stone". Shifting costs onto people that can't carry those costs doesn't work for the same reason supporting a skyscraper with a toothpick doesn't work. The toothpick breaks, and when the skyscraper lies collapsed on the floor, nobody blames the toothpick. Hell, they might say the toothpick was heroic: trying to save the situation, sacrificing itself, screaming, and when nobody helped, not the government, not the owners, not ... the building collapsed and all the damage was done.

But it's even more stupid than just that. As soon as this gets introduced and some company makes a security fix, they of course, for GPL or AGPL software, have to release their fixes. This will then make them liable for any other security problems in that same software. After all, they'll be the last ones releasing that software after the government implemented this law.

So how will you even do this, without making software fixes effectively illegal? Achieving the exact opposite of what such a law tries to achieve ... But of course, you can't have this discussion with people just looking to keep "their" free stuff but trying to shift the rest of the costs.


> liability
> Sounds like a good way to kill off open source entirely. This is luckily unlikely to happen.

It has already happened in the EU CRA, i.e. the law has passed. Implementation details still being negotiated.

https://hn.algolia.com/?query=cyber%20resilience


That only covers for profit software.


Scope fine print details are still being negotiated.

Second HN story from the link above, Dec 2023, https://news.ycombinator.com/item?id=38787005

  The Debian project has completed a general-resolution vote, adopting a statement expressing concern about the Cyber Resilience Act (CRA) pending in the European Union.

  Even if only "commercial activities" are in the scope of CRA, the Free Software community - and as a consequence, everybody - will lose a lot of small projects. CRA will force many small enterprises and most probably all self employed developers out of business because they simply cannot fulfill the requirements imposed by CRA. Debian and other Linux distributions depend on their work. If accepted as it is, CRA will undermine not only an established community but also a thriving market. CRA needs an exemption for small businesses and, at the very least, solo-entrepreneur.


Yes, other stuff happened since then.


I wouldn't mind receiving insurance coverage (and the background check required to support it) IF YOU PAID ME TO DO SO!

But we (mostly) don't even pay open source developers to write the code ... who is offering to pay them for this insurance?

Besides, this was a highly sophisticated actor. Someone willing to create several layers of loaders and invest large amounts of time into getting xz excluded from certain checks. Anyone with such sophisticated spycraft could have fooled the insurance companies too.


Expecting liability coverage for source code people publish for free on their own time has very strong implications on free speech, freedom of arts, and freedom of science. I don’t think this is possible in a liberal society.

On the other hand, you can already buy software, where the vendor takes some kind of liability: just buy Windows, AIX, or one of the commercial Linux offerings. Same for software libraries: there are commercial offerings that come with (limited) liability from the vendor. It’s even possible to create software to stronger standards. But outside of some very specific industries (aerospace, automotive, nuclear, defense, …) there doesn’t seem to be a market for that.


If you apply this system to software all progress will halt.

We’d still be using MS-DOS.


This is an easy way that will achieve the goal of completely killing free software, destroying the entire software industry in the process.

I contribute stuff for fun, for free. Now I also have to PAY to do that??? Plus anyone can just steal my identity… I have to show my ID every time I sleep at a hotel. Hundreds of people have a copy of my id and could use it to open an account in my name online…

Do you guys ever read what you write? Did you stop to think about it for more than 0.3 seconds?


I actually meant quite the opposite: that contribution should be paid. Yes, it would have to be ring-fenced so that society and the ecosystem would know who contributes what. That would also mean though that someone assumes liability for a piece of code; when you do that, you add value (economic not just source-code) and thus you should / have to be paid --by whom? the hundreds of commercial companies that use your code and whose liability you are reducing.


But every piece of software is legally offered without warranty - "no warranty" is at the heart of the Microsoft, Oracle, and GPL licenses.

Yes I know the stories of “insurance made steam boilers safer”. And it’s true. But it also stopped innovation in the space before Charles Parsons came along and ignored the whole thing (military industrial aristocracy)

I think the answer sits somewhere in “have less stuff”.

We have millions of lines of code in all walks of life and I swear we are orders of magnitude over-engineered in almost all cases.

If you work for a large company try counting how many different ETL solutions exist, CSV uploaders, data lakes, warehouses and so on

Then imagine having one library to do it.

Somehow we need to get there for … everything


Agreed, and I don't believe "no warranty" can last much longer, or in fact should. It was encouraged back in the day when all this computer stuff was new and either walled off in unis or enterprises or in hobbyists' basements. But the real risk now is in the interconnections; the potential impact is orders of magnitude larger.

The closest metaphor is cars I think. And yes you can argue that innovation in cars has slowed down but also a 'minimum floor' of safety and efficiency forced by governments and insurers has made new entrants more likely. I.e. you shouldn't need to only trust Oracle, SAP with your business because then, erm, you'd have exactly the current situation in enterprise software...


> Agreed, and I don't believe "no warranty" can last much longer, or in fact should. It was encouraged back in the day when all this computer stuff was new and either walled off in unis or enterprises or in hobbyists' basements. But the real risk now is in the interconnections; the potential impact is orders of magnitude larger.

Ok, I can blow your mind.

You can start your own software projects and offer them with warranty. And people can join you, if they want.


And they will not want :D Not unless significant income is balancing the significant risk.

Certainly not for a few € of donations.


Why wouldn't people keep making open source, say "hey, no warranty!", but companies that use it in "load bearing contexts" have to assume liability for their choices, assuming someone enforces that.

Isn't that pretty much the way the world works now? What needs to be fixed?


> If companies insist on always taking the easy way out, they continue to introduce a significant amount of risk. Since it’s currently possible to make huge buckets of money, by writing crap, we’re not incentivizing high-Quality engineering.

For the past 10-15 years there has been a strong culture of never writing code when a library on GitHub or NPM already exists, which has, in large part, contributed to this. In many cases using existing battle-tested code is the right thing, but it was taken to an extreme where avoiding pulling down a bunch of random open source packages with questionable stability and longevity was often maligned as not-invented-here syndrome.

Now many of those packages are unmaintained, the job hopping SWEs who added them to the codebase are long gone, and the chickens are coming home to roost. The pendulum will probably swing too far in the other direction, but thoughtful companies will find the right balance.


Let’s add a clause to all OSS licenses:

“…comes as-is, without warranties and without any commitment for future work. Complaints will get your feature request deprioritised, may get you banned, and will look silly to any potential employer googling your name”.

Also, let’s make it a meme to call out unreasonable behaviour: stop Jigar-Kumaring!


Perhaps the situation would improve if it were easier/more normalised to offer to pay the core developer to fix the bug that affects you. If that were the case, it would boil down to put up or shut up.

This wouldn't be entirely without downside though, as there could be a risk that the project ends up getting steered by whoever has the most money, which may be at odds with what the broader community gets from the project. That's difficult to avoid whenever Open Source developers get paid, unfortunately. If it were limited to bug-fixes I think the risk would be slim. I'm not sure if any projects have tried this.


I think the abuse is unfixable. Being exposed to many people just exposes you to many strange people. It is like how celebrities get paranoid and try not to be seen, since they are magnets for strange people who recognize them.

Being pseudonymous filters out a lot of credible threats though.


I'm honestly really worried about Andres. He thwarted a very expensive operation by a state actor.

Also, this backdoor was discovered by sheer chance. How many of those backdoors are still to be discovered? We should be checking all the source code present on a standard Linux server right now, but all I can see is complacency because this single attempt was caught.


> thwarted a very expensive operation by a state actor

From the article:

  ..the fix for this was already in train before the XZ issue was highlighted, and long before the Github issue. The fix stopped the XZ backdoor into SSH, but hadn’t yet rolled out into a release of systemd.

  I believe there’s a good chance the threat actor realised this, and began rapidly accelerated development and deployment, hence publicly filing bug reports to try to get Ubuntu and such to upgrade XZ, as it was about to spoil several years of work. It also appears this is when they started making mistakes.


> How many of those backdoors are still to be discovered?

Since keeping such a backdoor hidden in plain sight is extremely hard and required tons of preparation and social engineering spanning multiple projects, the answer is probably a function of the number of those already discovered. As we don't discover years-old similar backdoors every now and then, and discovered this one pretty quickly, this might very well be the very first one that came this far.

Also, what's "sheer chance" for an individual is "enough eyeballs" for the collective.


I think the fact that it happened pretty much by chance means he's not more of a threat to any state actor now than before. It's not like he's suddenly the anti-chinese-backdoor guy because of this. Or maybe he is, but more in a funny infosec hall of fame kinda way. It won't be him saving us next time.


> I'm honestly really worried about Andres. He thwarted a very expensive operation by an state actor.

I don't think Andres is in serious danger, unless he is a persistent threat to the bad actors. It's true that we owe him big time for discovering the backdoor. But it could have been someone else before him. And it may be someone else the next time. Too much depends on chance for anyone to justify targeting him. They risk blowing their cover by doing that just to satisfy their ego.


Also he’s a Microsoft employee in Seattle. Unless it was the NSA’s op, the U.S. government is unlikely to ignore a suspicious act in its territory and especially not the precedent that anyone else is allowed to mess with a key infrastructure provider’s employees.


he's not launching a campaign against Unit 61398 and correlating everything they do to specific FOSS projects. he found a random bug and started asking a few questions. whacking him would accomplish nothing; any random nerd might have found this issue.


One of the ways I’m coming to phrase this as: “You can’t outsource understanding.”


Code quality has been plunging for years while we've become more dependent upon code.

Devin AI working Upwork jobs blew minds, but it succeeded for a reason. Upwork and similar sites are plagued by low quality contractors who do little more than glue together code written by better engineers. It was never a hard formula to copy.

Outsourcing programming work to the lowest cost and quality to third-party libraries is leading to inevitable results.

Obviously, the next leap will be sophisticated supply chain attacks based upon poisoning AI.



I think it’s even more fundamental than culture: there are so many moving parts nowadays that the vast majority just aren’t smart enough to write high-quality software within a normal 40-hour-a-week job.

e.g. The difficulty curve for writing apps went down a bit when iOS became popular, but nowadays iOS 17 is probably more complex than Snow leopard was in 2009 so there aren’t any low hanging fruit left for the median SWE.


> Let’s just keep doing the good SBOM work at CISA, and stop doing stunts around Huawei and such — Huawei is a speck of dust compared to the issues around tens of thousands of unpaid developers writing the core of the world’s most critical infrastructure nowadays.

I have to disagree with this.

There seems to be this weird mindset in tech that because there's problem X (the Five Eyes countries hacking each others' citizens at each others' request, Meta collecting data on users, a massive attack using xz that almost got into the wild, etc.) that China isn't a problem. It's this strange "our house isn't in order because of our own doing, so it doesn't matter if some dude off the street starts squatting in it" idea.

If you have a country that has a tech legacy mainly related to espionage and attacks on other countries' systems - and make no mistake, that's China's main legacy - don't buy their stuff, no matter how many times it's said that it's fine. At some point it won't be fine.

You can fix that and better secure FOSS projects; it's not one or the other.


> Secondly, a core issue here is systemd in Linux. libsystemd is linked to all systemd services, which opens a rich attack surface of third party services to backdoor.

I guarantee you more services link to glibc, and that glibc has a much, much larger attack surface than anything systemd.


Also it's not like this attack would have been impossible if systemd hadn't done that. As soon as anything loaded xz it could pretty much do what it wanted. Slightly more of a pain but not difficult.


> libsystemd is linked to all systemd services, which opens a rich attack surface of third party services to backdoor.

Many people have warned that systemd does way too much for an init system, and presents way too much attack surface. Cluck, cluck (chickens coming home to roost).


That statement is stupid anyway.

Firstly, it's not true: most systemd services don't link to libsystemd. It's useful for supporting the sd_notify feature, but plenty of things support it without using libsystemd.

Rich attack surface: there were like... four dependencies that aren't glibc, and irrespective of this backdoor attempt they were almost all already being made dlopen dependencies, which would've prevented this backdoor from working since it would've never been triggered for sshd. (Note, also, that linking libsystemd into sshd was a downstream patch.)

Seriously:

    $ libtree $(nix build --print-out-paths nixpkgs\#systemdLibs)/lib/libsystemd.so
    libsystemd.so.0 
     └── libcap.so.2 [runpath]
That's what you get today in NixOS unstable.
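To make the dlopen point above concrete, here is a rough sketch of how a dlopen-style dependency only enters the process when the code path actually runs (my own illustration, not libsystemd's code; the liblzma symbol name is real but the simplified signature here is only illustrative):

    /* Sketch: liblzma is only mapped into the process if this function is
     * actually called, so a process like sshd that never touches the
     * compression path never loads the library at all. */
    #include <dlfcn.h>

    typedef int (*lzma_easy_encoder_fn)(void *strm, unsigned preset, int check);

    int lazy_lzma_encoder_init(void *strm, unsigned preset, int check) {
        static void *handle;
        static lzma_easy_encoder_fn encoder;

        if (!handle) {
            handle = dlopen("liblzma.so.5", RTLD_NOW | RTLD_LOCAL);
            if (!handle)
                return -1;  /* compression support simply unavailable */
            encoder = (lzma_easy_encoder_fn)dlsym(handle, "lzma_easy_encoder");
            if (!encoder)
                return -1;
        }
        return encoder(strm, preset, check);
    }

A hard DT_NEEDED dependency, by contrast, is resolved by the dynamic loader at process startup for every consumer, which is exactly why the backdoored library got a chance to run inside sshd at all.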

The truth is that libsystemd and systemd in general don't have ridiculous dependency trees or attack surfaces. Most likely the big reason why this backdoor was being pushed so heavily to try to make it into Debian and Fedora releases was that the clock was ticking on the very small "in" they had found in the first place.

There are a lot of criticisms of systemd that I think are reasonable, but this really isn't one.


> (Note, also, that linking libsystemd into sshd was a downstream patch.)

I feel like a lot of people are really glossing over this point. What's happened here is that Red Hat and Debian have made a choice to patch OpenSSH in a way that opened it up to this attack.

It's a little ironic that e.g. Arch, which actually shipped the malicious code to end users since it publishes changes so much faster, as shipped never would have executed the payload (because they didn't patch OpenSSH).


>It's a little ironic that e.g. Arch, which actually shipped the malicious code to end users since it publishes changes so much faster, as shipped never would have executed the payload (because they didn't patch OpenSSH).

The backdoor was designed to only be injected during the building of an RPM or Debian package. Arch never would have been impacted no matter what choices they made. They were trying to hit production systems while minimizing their potential exposure to other less important types of users beforehand.


But it's nice when your init knows when a service is ready and can start the other stuff that depends on that one… Otherwise you're in retry hell.


Honestly, I'm a bit curious why sshd startup would ever be significant. Not sure.

The most intensive operation I can think of that might happen at sshd startup would be host key generation, but at least in NixOS it looks like this is actually handled in an ExecStartPre= script, which I believe means it will happen before After= units execute.

Of course I sincerely doubt that distributions decided to add sd_notify support just for the hell of it, so I am sure there is a reason; it's just not overtly obvious, especially considering plenty of distros don't do this, and sshd (and dependent units, presumably) certainly seems to work absolutely fine.
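For illustration, a Type=notify unit with pre-start key generation might look roughly like this (a sketch of the general pattern, not any particular distro's actual unit file):

    [Unit]
    Description=OpenSSH server daemon
    After=network.target

    [Service]
    Type=notify
    # ExecStartPre= runs to completion before ExecStart=, and with
    # Type=notify the unit only counts as started (for After= ordering)
    # once sshd sends READY=1.
    ExecStartPre=/usr/bin/ssh-keygen -A
    ExecStart=/usr/sbin/sshd -D

    [Install]
    WantedBy=multi-user.target

With Type=notify, units ordered After=sshd.service wait until sshd reports it is actually ready to accept connections, rather than proceeding as soon as the process has been forked; that is presumably the kind of ordering benefit the downstream patches were after.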


Well I guess CI automations could use that.


So on the one hand libsystemd provides what both you and the manual page describe as "useful" functionality for implementing this feature, but on the other you're implying that Debian shouldn't have used it to do what it's designed for?

Maybe libsystemd shouldn't provide sd_notify then, if you're not supposed to use it?


No, I never said you should or shouldn't use sd_notify. I said that Debian and Fedora added it downstream, that it was not a decision of OpenSSH.

Generally, I think using libsystemd is probably a good idea, if you are a C program that wishes to have support for sd_notify. Even better, even before they moved things to dlopen, the dependency chain was very clean; nobody would've batted an eye at the idea that something, even something as critical as the SSH daemon, would have a very popular and well-regarded compression library which is already trusted in many package managers loaded into the process space. There is absolutely nothing unreasonable about considering that to be trustworthy. Your sshd probably has zlib linked in and it is fine.

Seriously. We're not talking about the difference between zero dependency and one dependency, but rather something more like 34 dependencies vs 38 dependencies. Here's my sshd, noting that libtree excludes some glibc stuff by default:

    $ libtree `which sshd`
    /run/current-system/sw/bin/sshd 
    ├── libgssapi_krb5.so.2 [runpath]
    │   ├── libkrb5.so.3 [runpath]
    │   │   ├── libk5crypto.so.3 [runpath]
    │   │   │   ├── libkrb5support.so.0 [runpath]
    │   │   │   │   ├── libkeyutils.so.1 [runpath]
    │   │   │   │   └── libresolv.so.2 [runpath]
    │   │   │   ├── libkeyutils.so.1 [runpath]
    │   │   │   └── libresolv.so.2 [runpath]
    │   │   ├── libcom_err.so.3 [runpath]
    │   │   │   ├── libkrb5support.so.0 [runpath]
    │   │   │   ├── libkeyutils.so.1 [runpath]
    │   │   │   └── libresolv.so.2 [runpath]
    │   │   ├── libkrb5support.so.0 [runpath]
    │   │   ├── libkeyutils.so.1 [runpath]
    │   │   └── libresolv.so.2 [runpath]
    │   ├── libk5crypto.so.3 [runpath]
    │   ├── libcom_err.so.3 [runpath]
    │   ├── libkrb5support.so.0 [runpath]
    │   ├── libkeyutils.so.1 [runpath]
    │   └── libresolv.so.2 [runpath]
    ├── libkrb5.so.3 [runpath]
    ├── libcom_err.so.3 [runpath]
    ├── libk5crypto.so.3 [runpath]
    ├── libz.so.1 [runpath]
    ├── libcrypto.so.3 [runpath]
    │   └── libpthread.so.0 [runpath]
    ├── libldns.so.3 [runpath]
    │   ├── libssl.so.3 [runpath]
    │   │   ├── libcrypto.so.3 [runpath]
    │   │   └── libpthread.so.0 [runpath]
    │   └── libcrypto.so.3 [runpath]
    └── libpam.so.0 [runpath]
        └── libaudit.so.1 [runpath]
But there is one thing that is questionable, and that's whether or not downstreams like Debian and Fedora should really be making patches like this that add new dependencies to security-critical programs like OpenSSH. It's one thing if OpenSSH takes on this dependency itself, but downstreams adding it is scarier specifically because it's so unexpected. If you didn't know the way OpenSSH was packaged specifically in Debian and Fedora, you would have absolutely no way of figuring out that liblzma5 could be in its process space and thus it wouldn't be part of the threat model. Certainly the upstream developers would have zero chance of ever noticing this.

Of course, I'm sure Debian and Fedora have their reasons for the specific patch that adds sd_notify support, it probably does improve something somewhere, but this incident absolutely showcases how the consequences of innocuous-looking patches like that can domino into something absolutely devastating in a non-trivial manner. I strongly suspect whatever it improves on would not have been worth it if the backdoor had succeeded in proliferating. Nothing is a panacea, but you definitely gain some advantage by sticking to configurations supported by the upstream where possible.

That said: Debian and Fedora already (largely) dodged this bullet, libsystemd already (even before this news) plugged the hole that made it possible in the first place, and everything is fine for now. Debian and Fedora thus have no real reason to remove this patch other than if their sensibilities regarding it have changed, because the risk that it posed is basically gone now. Now for this specific patch to pose a threat, you'd have to compromise either libsystemd or libcap2. On any distro that uses systemd to begin with, compromising systemd already gets you root, and libcap2 IIRC is maintained by the Linux kernel folks, so of course if anyone compromises that, it's way, way game over already.

Is this some grand lesson about systemd bloat potentially enabling a horrible sshd backdoor? You can read it that way, but it's a pretty silly take, considering all of the factors. But if it makes people who get emotionally attached to hating init systems happy, more power to them I suppose. As for normal risk analysis, there is no particular factor here that completely dominates for what really enabled this. It was a combination of decisions that, on their own, are completely justifiable and reasonable, but when combined, led to near disaster. Not a new story and not one that's going to stop occurring either...


While I agree generally about systemd, this line in particular is not even correct. You don’t have to link libsystemd to run as a systemd service. Using the notify feature is optional anyway, but you can do it without linking to that library.


Would an OpenBSD-style minimum functionality Unix philosophy designed init system stop a developer from taking over as maintainer of a different upstream project, allowing them to submit malicious patches?


How would you handle dependencies, ports, sockets, timed triggers, etc.?

Tens of thousands of lines of boilerplate code? Very easy to skip something in there. Why do we need to repeat everything?

The problem is exactly that: preprocessing, build scripts, etc.

Everything is a script and is therefore executable, instead of a configuration file/statement, which only increases the attack surface.


The grandparent comment was about how the issue was actually systemd. My comment is pushing back against that. As you point out, that isn’t a simple swap out. The replacement to systemd can be as complicated and intertwined as systemd is today if not more.

But what init system you use is irrelevant to the issue of what do you do when an upstream project is taken over by a malicious developer?


> The grandparent comment was about how the issue was actually systemd.

It's not my view that "the issue was actually systemd". The issue was a complex, sophisticated attack by someone taking a long view. But the attacker injected his code into the system by manipulating part of systemd into loading a fairly obscure compression library that he had hacked. Systemd is of course more-or-less omnipresent these days.

Why does an init system need to load libraries at all? Well, it doesn't, unless it has arrogated to itself much more functionality than PID1 should ever have.


The init system was utilized in a supply chain attack. Fixing the init system helps but is only a bandaid to the issue of supply chain attacks.


libsystemd isn't the init system, it's a shared library


> How would you handle dependencies,

I don't think most people would mind systemd as a service manager, if that's all it did.

> port, socket,

That's inetd.

> timed triggers

That's cron.

> etc?

Etc.


The backdoored library was linked to libsystemd, not systemd itself.


The dependency is attributable, in the largest part, to systemd's neoplastic aggrandizement of userland infrastructure and associated plumbing, making this a distinction without much of a difference.


"everyone should just reimplement LZMA!"

What could possibly go wrong. I'm sure there's no history of compression tools having serious vulnerabilities due to implementation errors...


This is another furphy, because OpenSSH proper neither requires nor uses xz/lzma. It's made clear in Andres Freund's original report¹ that the libsystemd dependency dragging it along arises from distros patching openssh to support systemd notifications. The sad part is that systemd notifications are just a datagram on a socket, so using libsystemd for this is reminiscent of Joe Armstrong's banana.

[1] https://seclists.org/oss-sec/2024/q1/268
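For what it's worth, the whole "notification" really is just one datagram; a minimal sketch of my own (not libsystemd's or any distro's code, error handling kept terse):

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Send READY=1 to the socket systemd names in $NOTIFY_SOCKET. */
    int notify_ready(void) {
        const char *path = getenv("NOTIFY_SOCKET");
        if (!path || !*path)
            return 0;                      /* not running under systemd */

        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        size_t plen = strlen(path);
        if (plen >= sizeof(sa.sun_path))
            return -1;
        memcpy(sa.sun_path, path, plen);
        if (sa.sun_path[0] == '@')         /* abstract-namespace socket */
            sa.sun_path[0] = '\0';

        int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
        if (fd < 0)
            return -1;
        ssize_t n = sendto(fd, "READY=1", 7, 0, (struct sockaddr *)&sa,
                           offsetof(struct sockaddr_un, sun_path) + plen);
        close(fd);
        return n < 0 ? -1 : 0;
    }

That is essentially everything the downstream sd_notify patch needed from a library.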


As many have already pointed out, the library can also be linked to sshd via selinux.


I've seen that ambit claim too, but I'm not even sure what distro(s) it is referring to since I'm unable to confirm it on any host where I have ldd casually to hand. Ref however https://seclists.org/oss-sec/2024/q1/356


That packaging error means liblzma gets pulled in at installation (well, it's probably already there if PID 1 requires it). But it will not make the sshd binary use it. So I think the original claim stands: without patching sshd for the notification, it will not use liblzma.

Disclaimer: I did not search for all possible occurrences of dlopen().


https://github.com/proposal-signals/proposal-signals

> libselinux does not link to liblzma. It turns out the confusion was because of an old downstream-only patch in Fedora and a stale dependency in the RPM spec which persisted long-beyond its removal.


I get the sentiment. zstd is just better, though!

Other than that I did try a manual port (of zstd) to Java but I was not pleased with the results.

The other part is that systemd uses plain unix sockets with the most basic of protocols (that part along with docker forwarder was doable)


"the car bomb was build into the spare wheel not the car itself...."


"... and although the spare wheel is included by default, the car is modular and you can always remove it yourself"

[By dismantling the car completely and reassembling it, since you'd have to rebuild from source...]


Most cars no longer come with a spare wheel, certainly by default. Not sure if that’s because cars don’t get punctures any more due to the great state of the roads here in the U.K, or because people just can’t even change a wheel.


It's to help meet emissions targets according to the garage I bought from. The spare wheel weighs more than not having one.


Not in most systemd distros though. Those include all kinds of spare wheels that automatically take over your actual wheels.


You know they just ran ldd on sshd and picked one of the results as a target, right? If it wasn't that, it'd have been a different one…


Really

I get the idea of systemd, and I do think that the Unix principles were good for the 70s but maybe not so great today.

Still, in true LP fashion, a lot of design decisions are not being thought through with stability or longevity in mind.


It's acceptable for avahi and pulseaudio because you don't need to use them.

With systemd, it's a big problem because systemd is widely distributed. Systemd failed on its promise to create a new standard for defining services. Right now a ton of projects ship their own supervisor (runit, supervisord) or a docker-compose file to reduce their contact surface with systemd. Look at what GitLab Omnibus does.

But with everything connected to systemd (udev, dbus), anything without systemd is sort of second class in terms of being tested. Ideally I would have the stability of Debian without the surprises from systemd. I tell people to "press Ctrl+Alt+Del 7 times within two seconds" way too often.


LP?


My guess is „Lennart Poettering“.


I assume Lennart Poettering, creator of systemd et al.

https://en.m.wikipedia.org/wiki/Lennart_Poettering


You know that if it hadn't been a library linked by systemd, it would have just been another library right?

Your hastiness to hate on systemd made you forget what we're even talking about…


Maybe

But why in tarnation do they need libxz to talk to systemd over dbus? Harm reduction should start there.

And also, maybe let's rethink dbus, or bring it up to a modern standard and not have it be just CORBA for geeks.


Yes let's rewrite everything… that will fix security! /s


This backdoor would've been caught eventually because the added latency is substantial. It wouldn't have occurred on Arch Linux, BSD, macOS, or any Solaris-derived OS (Illumos etc.).

We should be grateful it got caught so quickly. I sent Andres an honest thank-you email. It isn't financial (and I am just one individual) but it felt like the least I could do.

If there were a way to donate, I would. This person could've earned more via HackerOne or the black market and instead went with an arguably better path. I don't think we can compete with the latter though, unless we start treating this for what it is: a weapon.


I am grateful too, but I don’t think that this person could have earned more via black markets. This backdoor in ssh was NOBUS. You cannot exploit it unless you have the private key.


It might be worth it to analyze a bit more.

Like, what would be different if the software were closed source and the developers were paid by companies? I think it would be at least as hard to notice such an exploit, and sometimes it might be easier (if the company is located in your jurisdiction).

Maybe the current mindset of assembling programs could be improved. There is a trend in some architectures to separate everything into its own container, and while I don't think it can be directly applied everywhere, that model gives more separation for cases like this. Engineering is an art of trade-offs, and maybe now we can afford to make different trade-offs than 30 years ago (when some things were decided).


… and everything old is new again.

DJB’s qmail, written in 1995, was made of 5 distinct processes, owned by different users, with no trust among them. Coincidentally, it was the MTA with the best security record for more than a decade (and also the most efficient one).

It would have likely had a similar record even if it was only a monolithic process - because DJB - but it was built as 5 processes so even if one falls, the others do not.


The problem is most developers and companies simply don't care, or are even hostile to improvements ("this is not the Unix way"). We have had SELinux for over two decades. We can do even more powerful isolation than qmail could at the time, yet nobody outside Red Hat and Google (Android/ChromeOS) seems to be interested. Virtually all Linux distributions largely rely on a security model from the '70s and a packaging model from the '90s. This is compounded by one of the major distributions providing only 'community-supported security updates' for their largest package set (which most users don't seem to know), which unfortunately means that a lot of CVEs are not fixed. A weak security model plus outdated packages makes our infrastructure very vulnerable to nation-state-funded attacks. The problem is much bigger than this compromise of xz. Hostile states probably have tens if not hundreds of backdoors and vulnerabilities that they can use in special cases (war, etc.).

It's endemic not just to open source. macOS has supported app sandboxing since Snow Leopard (2009), yet virtually no application outside the App Store (where sandboxing is mandatory) sandboxes itself. App sandboxing could stop both backdoors from supply chain compromises and vulnerabilities in applications in their tracks. Yet, developers put their users at risk and we as users cheer when a developer moves their application out of the App Store.

It's time for not only better funding, but significantly better security than the '70s/'80s Unix/Windows models.


> no application outside the App Store (where sandboxing is mandatory) sandboxes itself.

Usually applications distributed outside the app store would simply not work (or be limited to close to useless) if sandboxed.

My pet example these days, DaisyDisk, cannot show what takes 10-30% of my space in the app store version. And can't delete protected files in Applications etc.

Which would be nice if it were a malicious free to play game, but it's an application that graphically reports what's taking space on your computer and optionally deletes stuff that you've chosen. So it simply can't work well inside the sandbox.


> Usually applications distributed outside the app store would simply not work (or be limited to close to useless) if sandboxed.

I disagree. Sure, there are some applications that need to be distributed outside the App Store because they need additional privileges (like DaisyDisk), but there are many applications that are distributed outside the app store that could be sandboxed. Just to give some examples, why do Discord, Signal, Obsidian, Dash, or 1Password have unsandboxed processes? (1Password was in the App Store and sandboxed before it became an Electron app.)


Well we can't blame Electron for this, as much as I'd like to, since there are Electron apps in the app store.

Discord asks for the accessibility option to read system wide keystrokes for push to talk. Can sandboxed apps do that?

Also, no matter how secure the app store is, i'd very much like to be able to install applications without -ing Apple's permission. So having something available in the app store doesn't give me a warm fuzzy feeling.


> Discord asks for the accessibility option to read system wide keystrokes for push to talk. Can sandboxed apps do that?

You can register a shortcut in the sandbox, you cannot read system-wide keystrokes from the sandbox.


Some key apps on macOS do sandbox themselves, most obviously Microsoft Office.

The main issue is that the tooling around sandboxing is poor. If there are violations the best you're going to get is an opaque message logged to an unbelievably verbose log torrent. Also, the low level sandboxing APIs you need to really batten things down for internal components aren't documented. Chrome uses them anyway, and they just have to rely on the fact that they're so big that Apple won't break them. But if you're smaller, it's a risk.


Playing devil's advocate: is the current level of security the biggest problem of the computer systems?

Just two (similar) examples that cross my mind: large monopolies (ex: thinking Microsoft in the 90's), imbalanced power between actors (ex: only nations states/huge companies can do something not a small group of hackers).

I think diversification and accessibility of the technology would solve more problems overall rather than just focusing on security. It is just hard to strike a balance between efficiency and diversity (ex: one Linux distribution might be efficient resource-wise but it is not diverse; how many and how different would be diverse enough?)


it was also essentially unusable without a crapload of third party patches that DJB would not include into the master release, but yes it was quite secure :-)


And it was highly vulnerable to denial-of-service attacks. It didn't check whether the mailbox was valid during the envelope phase, so it would queue basically everything, then check the mailbox and send a bounce if necessary. Sending thousands of messages to random boxes (a dictionary spam attack) would queue thousands of bounce messages that would be rejected by the (faked) sender domain, bringing the qmail server to its knees. Ask me how I know this...

Thing is, in most companies, it's cheaper and more efficient to deal with a sporadic vulnerability than to have your e-mail system DoSed every other week.

This is the kind of compromise that normal people and companies have to make all the time, but radicals and cryptopunks like DJB can't seem to understand. Sure, he's a brilliant mathematician and cryptographer, but his grasp of reality outside academia seems very flimsy, IMO.


My qmail setup in 2000, on a humble beige box, was occasionally under a “thousands of bad addresses” attack, but I only found out about it a few days later while reviewing the logs. There surely was a threshold where it would be down on its knees - but “thousands” and even “tens of thousands” wasn’t it. The exchange server it replaced, though, would crash and burn very often, for a variety of reasons.


Does any private Microsoft/Google/Apple/whatever program have any backdoors? We don’t know and we will never know.

At least with open source we are able to detect them.


I don't think this is necessarily true. People do a lot of reverse engineering of proprietary OSes and a lot of vulnerabilities are found that way (besides fuzzing). And the tooling for reverse engineering is only getting better.

Also, let's not forget that this particular backdoor was initially found through behavioral analysis, not by reading the source code. I think Linus' law "given enough eyeballs, all bugs are shallow" has been refuted. Discovering bugs does not scale linearly with eyeballs, you need the right eyeballs. And the right eyeballs are often costly.

If your implicit premise is that having the source code available makes it easier to analyze than closed source, you can also flip the argument around: it is easier for bad actors to find exploitable vulnerabilities because the source code is available.

(Note: I am a staunch supporter of FLOSS, but for different reasons, such as empowerment and freedom.)


Yes, and they are eventually discovered by reverse engineering.

Example: https://en.wikipedia.org/wiki/NSAKEY


Google's Fuchsia OS looks promising, but it doesn't look like it (or something like it) will go anywhere until the world accepts that you probably have to build security into the fabric of everything. You can't just patch it on.


I think Google has given up on Fuchsia, outside some specific domains, right?

I think currently even Android, iOS, and ChromeOS have far better security than most desktop and server OSes. I think of the widely-used general purpose OSes, only macOS comes fairly close because it adopted a lot of things from iOS (fully verified boot, sealed system partition, app sandboxing for App Store apps, protected document/download/mail/... directories, etc.).


QubesOS is the closest we have to a better security model in the desktop area.


There isn’t much closed source software which is depended on as heavily as things like xz. The only one I can think of is Windows, which I think it’s safe to assume is definitely backdoored.


Companies are infiltrated all the time [0]

However, even if a company detected the infiltration, the incentive is to keep quiet about it. Let's say that a closed source "accessd" was backdoored: a database admin notices the new version (accessd 2024.6 SAAS version model 2.0+ with GPT!) is slower than the previous version and puts it down to enshittification. Or they contact the company, which has no incentive to spend money to look into it. There's no way the database admin (or whoever) can look into the git commit chain and see where the problem happened.

[0] https://rigor-mortis.nmrc.org/@simplenomad/11218486968142017...


Gonna have a serious talk with my mother about not trying to hack into opensource software that powers most of the world's software. I know retirement is boring, but that's no excuse.


Some good observations here, including about the apparent acceleration of effort and ensuing sloppiness due to impending changes to systemd that would have prevented the particular attack vector.

Unfortunately, the sarcastic tone starts to become a barrier to separating signal from noise about halfway through. Okay, you’re super clever, the NSA is a threat, too, we get it. Security vendors are largely hucksters, fine. What were we talking about again?


For me, the critical take-home message is:

"the issues around tens of thousands of unpaid developers writing the core of the world’s most critical infrastructure nowadays."

We need to take open source software sustainability much more seriously.


I wonder how commercial software would compare in such a situation. Assuming you have a bad actor among the employees or a breach in the security. Not a lot of eyes watching that source code, I'd assume.


Depends what you mean by commercial software. Consider Google. Not gonna be affected because they build everything internally themselves and for the little open source code they rely on, they replace the build system when they import it to their monorepo (or at least they used to).

For everyone else? Not much different to open source.


It's likely that the commercial software uses a lot of open source libraries. Maybe with attribution, maybe not. Decent chance they periodically download whatever the latest version is and link it into their software without reading it.


Haha, source code routinely doesn't exist to be scrutinized (despite company policy). Just modify the code in prod; who's going to go poking around after you there?


> Assuming you have a bad actor among the employees

What if the entire company is a bad actor?


Which of the FAANGs are you referring to specifically?


Yeah, fewer eyes on it means it would be less likely to be caught, but there's also much more upfront validation on who gets to change it.

Although I can imagine a sub-sub-contractor identity being somewhat easily forged, it's still harder than just creating a GitHub account.


> but there's also much more upfront validation on who gets to change it.

Very doubtful in my eyes. Very few companies have the strict validation which would be required to catch this.

Good validation of what goes into central components is often skimped on because of arbitrary deadlines, which companies are full of. On something tangential like this I suspect nobody would really notice.

I can also hear the "I noticed now there's an extra delay which isn't supposed to be there, can I investigate?" "Sorry, but this is not critical for this deadline" agile mentality.


Not sure we're talking about the same thing. By validation I mean validation of identity. With proprietary software, before an attacker gets inside access to the code, they would have to interview, get hired, submit (presumably fake) ID documents, provide (again, presumably fake) bank details, etc. This attacker just had a GitHub account and an email, as far as I understood it.

But, as I said, maybe tricking a subcontracting company into hiring you is not as hard. I remember working with contractors whose faces I've never seen on video, let alone in person.


I didn't think of "identity" in this sense, but I don't see this as a show-stopper either.

At my current gig developer churn is not high, yet I've only recently met developers hired 6+ months ago. I know first-hand only a handful of the committers I see, and barely know the most common committers. I generally do watch commits of the trees/projects I'm interested in, but I'm a minority, and such behavior wouldn't catch something similar to the xz situation unless I'm absolutely lucky.

This also ignores the fact that you can just as well corrupt a current employee.


You might not know them, but HR does. No way your employer is sending them money every month without a reasonable degree of certainty that they are who they say they are. Or, at the very least, that they aren't 3 hackers in a trenchcoat.

And corrupting an employee doesn't sound that easy, either. I mean, we do get paid above average.

That still leaves shit third party contractors and compromising employees computers/accounts, though.


> also much more upfront validation on who gets to change it.

Some underpaid Indian contractor, you mean?

I'm sure they'd never ever possibly accept a bribe!


I can confirm the statement about the security industry. I just opened LinkedIn and my feed, along with two spam DMs, is full of companies trying to imply their product had always detected this.


It's a damning indictment of the computer security industry, IMO.

They push wildly complicated and performance-costly bandaids, of dubious benefit, for vanishingly unlikely exploits, and pay very little attention to holes like this that you can drive a truck through. https://xkcd.com/538/ is very fitting.

The fact a database hacker found it, and it missed all these supposed scanners and auditors and security experts is the icing on the cake. But even if it was one of them that found it, it's already in the wild.

The technical programming tools and techniques for security are important, don't get me wrong; the security industry just seems to have very little concept of cost-benefit, or of the big picture of how attackers and their targets operate and what motivates them. They seem to mostly exist to justify their own existence.

I've already seen "security experts" wheel out the big scary (and meaningless) term "nation state" over this, which makes me laugh. Sure, it could have been the CCP. It could also have just been some bored kid in his parents' basement; that just wouldn't look good to admit, though.


I don't agree with you fully. Look at SolarWinds, for example, or many other supply chain attacks: they were discovered much later, after successful abuse.

The public availability of the software helped catch the attack much faster than commercial software. Even if there was intensive scrutiny of changes to the project, a person familiar with the process can still come up with hard to detect backdoors.

Future backdoors may be stealthier, but what this case demonstrated is that even a database hacker who doesn't do security audits could catch it, simply because it's open source. The expectation should be that such backdoors would be detected many months or years after the fact in your typical popular closed source application.

This is a case for open source software usage and funding. The security industry can't do much in terms of prevention against a malicious insider who knows the codebase better than any outsider. And open source or not, people can get paid or planted to sabotage software.


I imagine though that in this case the attacker has a much simpler time staying anonymous, vs commercial software using paid and vetted employees. Or am I missing something about how this was contributed?


Check this for example: https://www.zdnet.com/article/cisco-removed-its-seventh-back...

It's not always obvious but devs adding backdoors and vulns is not all that new.

The guy may have been anonymous here but a legit dev's github account compromise could lead to the same outcome.

Each open source project decides how much vetting is applied to contributors. I don't think you can contribute to Linux without using your real name and email, for example. In some countries, getting a job for the express purpose of sabotage is very common. People using stolen IDs to get remote dev jobs is also a thing (although I haven't heard of that being abused for backdooring). At least with open source, you can audit the code for anonymous user contributions and look at the project's policy for them.


> The fact a database hacker found it, and it missed all these supposed scanners and auditors and security experts is the icing on the cake

It's been reported that both Fedora and Ubuntu's valgrind rigs flagged this, actually, which is why the package changes hadn't propagated to those distros yet. It's true that they didn't get as far as recognizing the root cause because Andres beat them to it, but the security infrastructure was absolutely doing its job.


Which seems even worse for the security industry. They had these tools, didn't understand the warnings so ignored them and pushed the things upstream anyway.

It wasn't just that they were beaten fair and square on equal terms -- they had the opportunity before the packages got into their distro upstream, and they (allegedly) are the ones who look for and audit security issues. The database guy found it by observing a peripheral performance issue it caused, then pursued and tracked down the problem. A stark contrast to the uncurious attitude of the security theater that was supposed to be actively looking for these things and examining warnings from tools.


> so ignored them and pushed the things upstream anyway

Again, that happened only in Debian testing and Fedora rawhide (and maybe a few other downstreams, though really the exploit requires a particular systemd setup and won't work on arbitrary "linux" systems). Those are rolling release variants deliberately intended to take upstreams rapidly with minimal review, precisely so that integration testing can be done. And it was, and it flagged the issue via at least one symptom.

Only one person gets the cookie for finding any given problem, usually. And this time it was Andres, and we should absolutely celebrate that. But that doesn't mean we run off and shit on the losers, right?


> Again, that happened only in Debian testing and Fedora rawhide

Right, lots of failures happened. I'm not sure the point.

> Those are rolling release variants deliberately intended to take upstreams rapidly with minimal review, precisely so that integration testing can be done. And it was, and it flagged the issue via at least one symptom.

And nothing was done about it.

> Only one person gets the cookie for finding any given problem, usually. And this time it was Andres, and we should absolutely celebrate that. But that doesn't mean we run off and shit on the losers, right?

The problem isn't that they were not the first to find it; the problem is that they weren't even in the race. And people and processes are not immune to criticism for failures, so we absolutely can "shit on" the failures at many levels that have helped to make this situation possible.


I don't think a "bored kid" could pull off such a sophisticated attack while being patient for almost a few years


I think they could. By kid I mean roughly mid teens to mid twenties. I've seen what they can do -- create Linux, for example.


> while being patient for almost a few years

could have become bored, forgot and then "rediscovered" it


aye. the P in APT is for "Persistent". cuz that's what APTs are.

play the long game, low and slow.


A bored kid would be bad because of the potential for this type of threat to become more prolific.

A nation state would utilize this attack chain to steal copious amounts of sensitive data and prepare infrastructure for coordinated attacks on critical infrastructure and intellectual property.

The threat actor is relevant because it informs us about their strategy. If you expect security experts to prevent this, they need to know why they're preventing it and have some concept of the realism of any given notional threat. There aren't enough of us to address every threat.


As an aside, nation-state is a specific geopolitical term. I don't know why it started to be co-opted by the security circus; perhaps it's just an attempt to give themselves more gravitas. They just mean country or state, right? Even that doesn't say anything quantitative or useful about security anyway.

And I really disagree that it matters: whether it's a bored kid, an underhanded corporation, an organized crime ring, a terrorist group, or a government security agency doesn't really matter that much. Some understanding of motivation sure helps, but they don't need to play CIA agent. It's not like the PLA would use this type of hole, the FSB a different type, and ISIS some completely different route. They're all just looking for ways they can infiltrate, sabotage, deny, steal, etc. So just work on the security problems, roughly in priority order from the cheapest cost relative to benefit to the greatest.

You don't need to know anything about nation states or any of that claptrap to know that rogue maintainers and contributors are a major problem. And you don't even have to know that to know that linking your privileged sshd process to libsystemd is a stupid idea. Yet millions are spent on ever more esoteric and complicated hardening and speculation issues and things that have never been shown to stop feasible remote attacks.

Probably because those are cool and techy and working on people problems is much harder and less interesting to many tech types.

Not to say don't do any of the technical stuff, but the calls for more funding of OSS I'm hearing won't solve problems like this if the funding goes to more of that kind of thing. It's not a funding problem, it's a spending problem. What is desperately needed is some exceptional people who have a big-picture understanding of the problem, including good people skills, to hold the purse strings and set some direction.


not just some DB hacker, dude is a principal engineer at microsoft.

but the point generally stands: how did a random user, even if a power-user, find this when auditing and processes didn't?


I didn't say "just" some DB hacker. He's almost certainly much more skilled than most "security" people I would say.

But he wasn't actively looking for security bugs nor gives off security researcher hubris.


"We need to get rid of the Windows 98 machine running the CNC machine that can't access the internet, it's a real security risk to the company"


Ok. You shit all over security experts. Now what?


> It’s also important we keep things in perspective, e.g. this one was spotted and averted.

My worry is, how many have not been spotted and have not been averted? From my reading of this one, it was blind luck that it was spotted.


I just opened LinkedIn and saw a post from Satya Nadella congratulating Andres.

Clicked over to Andres's profile and saw he works at Microsoft. I am a bit surprised that no article or HN post about this prevented hack mentions this fun fact.

(Yes, we should still buy this guy a coffee, but he's also a principal engineer at MS.)


It's one of the stranger turns in computing history that Microsoft has turned into one of the most reliable security presences for skullduggery like this.


given how much of the IT landscape they own, they damn well better be security experts.

or consider how much damage they've done in the past to said landscape due to poor security.


>OpenSSH runs...

Not every OpenSSH build is linked against xz. OpenBSD's isn't.


Correct. OpenSSH doesn't care about linux or systemd. Makes me want to switch to BSD now


Arch Linux uses systemd, but does not patch sshd, so it's not vulnerable to the known backdoor either.

However, to my knowledge nobody has fully analyzed the malware binary yet. So we don't know whether it only contains a single mechanism to attack sshd or whether it also has other harmful components.


Well there is always Slackware, no systemd and unpatched sshd. win-win :)

And correct, this was not an issue for Slackware Current (which has xz 5.6.1). Slackware 15 has xz 5.2.5


Looking at how JiaT75 got access to projects, it was vacuous sock puppets creating fake pressure. https://boehs.org/node/everything-i-know-about-the-xz-backdo...

It feels like we should have better socialized defense against this kind of thing. When I see a name pop up online, I want an immediate readout on how active this person is over time and where. We need better tools to get high level views of who we are dealing with.
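
As a rough sketch of the kind of quick readout meant above, here is a small Python script against the public GitHub API (unauthenticated, so heavily rate-limited; the fields shown are arbitrary and this is nothing like a real reputation system):

    # Rough sketch: print account age and basic activity stats for a GitHub user.
    import sys, json, urllib.request
    from datetime import datetime, timezone

    def readout(login):
        with urllib.request.urlopen(f"https://api.github.com/users/{login}") as resp:
            user = json.load(resp)
        created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
        age_days = (datetime.now(timezone.utc) - created).days
        print(f"{login}: account age {age_days} days, "
              f"{user.get('public_repos', 0)} public repos, "
              f"{user.get('followers', 0)} followers")

    if __name__ == "__main__":
        readout(sys.argv[1] if len(sys.argv) > 1 else "torvalds")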


Makes me wonder how many backdoors like this are in the wild, undetected


I would imagine quite a few. It would be prudent to start paying attention to the tools that were created to catch some of these issues (like how people ignore Valgrind errors), as well as to commits made to OSS projects that disable some feature because it breaks a third-party library.

>> odd valgrind complaint in automated testing of postgres

This implies these kinds of complaints are routinely ignored.
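
As a sketch of the opposite of "routinely ignored", here is a hypothetical CI gate in Python that runs a binary under Valgrind and fails the build when Valgrind reports errors. The binary path is a placeholder, and Valgrind's --error-exitcode would also surface if the program itself happened to exit with that code, so treat this as illustrative rather than robust:

    # Hypothetical CI step: fail loudly on Valgrind errors instead of ignoring them.
    import subprocess, sys

    def valgrind_gate(binary, *args):
        cmd = ["valgrind", "--quiet", "--error-exitcode=99", binary, *args]
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 99:
            print("Valgrind reported errors, failing the build:")
            print(result.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(valgrind_gate("./myapp", "--self-test"))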


The point about owing Andres unlimited beer is a good one. He should get something good out of this!


> We should stop using open source and only buy American vendor products! Yeah, good luck with that.

This isn't a good take. This presents proprietary software as offering superior assurances against backdoors, if only it were practical to completely avoid Free and Open Source software. The truth is approximately the opposite: code being Free and Open Source should be considered a necessary condition, but not a sufficient one, for resisting backdoors.

> We should start funding every open source project! Yeah, good luck with that.. I’ll start saving for my trip to Mars.

This kind of defeatism is unhelpful.


> This presents proprietary software as offering superior assurances against backdoors

In the image below that remark the author makes clear that proprietary software is often subverted.


How does one buy Andres a beer?


All in all, am I not vulnerable if I'm running an init system other than systemd?


No, you are likely still vulnerable if you have this version of the library and a typical sshd instance; I believe it's about the linking rather than the init system.


Remember that the vulnerability was introduced by distributions patching sshd to talk to systemd. It doesn't make sense for a distribution to patch sshd like this unless they use systemd. Thus, choosing another init system may save you indirectly, even though you're right that this is really about the linking.

Ultimately, the attacker deemed systemd to be common enough to consider it as the way to hijack sshd, but indirect enough to avoid discovery by audit. It says something about software monoculture.
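
A rough way to see whether a given sshd is even in the affected configuration is to look at what it links, directly or transitively. A sketch in Python, assuming ldd is available and noting that the sshd path may differ per system; this checks configuration only and is not an indicator of compromise:

    # Check whether sshd pulls in libsystemd (and, through it, liblzma).
    import shutil, subprocess

    def linked_libs(binary):
        return subprocess.run(["ldd", binary], capture_output=True, text=True).stdout

    if __name__ == "__main__":
        sshd = shutil.which("sshd") or "/usr/sbin/sshd"
        libs = linked_libs(sshd)
        for needle in ("libsystemd", "liblzma"):
            print(needle, "linked" if needle in libs else "not linked")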


Quick, someone link that xkcd where a crucial open source technology is being maintained by basically a single person, while billion dollar companies rely on it.


So, uh, who did do it?



What an uninspiring take: there is literally nothing we can do, the status quo is fine, and the abuse of unpaid open source devs is a harder problem to solve than commercial space flight to Mars. It's fine to state that this is difficult to solve, but why be so defeatist about it?


It's much easier to do anything if you're honest about the reality you're working in.


Lots of Arch users sensed the unusual slowness of ssh, but fewer would have gone that far because of naive trust in the community.


sshd in arch isn't built with the systemd patch, and thus experienced no slowdown. You can read about this on the arch linux blog: https://archlinux.org/news/the-xz-package-has-been-backdoore...


arch was unaffected as far as we know.


Just the one attack vector that we know of. There could be many that are yet to be discovered.


Eh? People have dumped strings and more from the attack payload. It's exclusively targeting openssh.

However they left the door open to future payloads by simply dropping new blobs into the test cases folder.


The XZ attack is (seemingly) exclusively targeting openssh.

Considering it's a library that isn't even a direct dependency of ssh and someone was willing to put in over two years of effort, it doesn't seem crazy to start wondering what other (small) projects have been suffering maintainer fatigue, or are even just big enough that an "lgtm" response is almost guaranteed for a sane-looking PR.

"Smuggle in unexpected code/blobs as test files", "bastardise make scripts to inject your code into the build" and then "have your code preferentially override [decryption] routines" aren't just (as of last week) unexpected, but would also seemingly be usable against any other project out there.

I'd suggest it's more like finding and killing a cockroach in your bathroom. Thinking that the roach problem is now gone for good and was only confined to the one room seems... optimistic at best.

I also think it's fun to wildly speculate on what else could be affected - or to play the game another way, "if I had control over sshd, what else would I want?"

From an outside point of view, I'd quite like web servers. They're often externally accessible when ssh isn't, they're doing cert validation/compression/decryption, and I'd wager that on most boxes they could easily give you an SSL-wrapped tunnel to (the vulnerable) sshd running on the server's loopback address without SELinux blocking you.

Actually, having written that, has anyone looked at the SSH _client_ process rather than server? Surely it needs to be able to parse certs in the same way as the server does, and getting clients to leak their private keys somewhere seems like it would be extremely useful as well.



