The Case of the Modified Binaries (leviathansecurity.com)
193 points by pcwalton on Oct 24, 2014 | 72 comments



Tried to download Mozilla Thunderbird lately? Have any luck finding a SHA checksum for it? It's hidden at: http://download.cdn.mozilla.net/pub/mozilla.org/thunderbird/...

You can't get it over SSL. Not to worry, the binary will be signed by Mozilla, right? Yeah, GPG only. Not X.509 signed.

But hey, the online install page supplies it over SSL right? Well, sometimes. But it turns out they don't enforce SSL use. Cue SSLstrip.

PS: On Mac OS X 10.9, Apple prevents running unsigned binaries by default. Not to worry, Mozilla tells you how to bypass the check, without even hinting that it serves a very valid purpose. https://support.mozilla.org/en-US/kb/firefox-cant-be-opened-...


Uh, in case you didn't notice, those SHA checksums are also PGP signed.
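For anyone who hasn't actually done it, the end-to-end check looks roughly like this. A minimal sketch in Python, assuming the usual SHA512SUMS / SHA512SUMS.asc layout on the download server, that Mozilla's release key is already in your local GnuPG keyring, and a placeholder name for the downloaded file:

    import hashlib
    import subprocess

    sums, sig, download = "SHA512SUMS", "SHA512SUMS.asc", "Thunderbird.dmg"  # placeholder names

    # 1. Verify the detached signature on the checksum list (raises if it's bad).
    subprocess.run(["gpg", "--verify", sig, sums], check=True)

    # 2. Hash the download and look it up in the now-authenticated list.
    h = hashlib.sha512()
    with open(download, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    if h.hexdigest() not in open(sums).read():
        raise SystemExit("checksum mismatch: do not run this binary")

Of course this only moves the problem to obtaining the right key securely, which is the whole point of this thread.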

I Googled Mozilla Thunderbird and the first hit was the download page, using HTTPS. You're right that SSL isn't enforced, but that's a chicken-and-egg problem for Firefox downloads, I guess, now that TLS 1.2 is enforced and the user may be stuck with a browser that doesn't support it.


In case you didn't read it, yes, I said they were GPG signed. But they don't make that info available (or linked) on the main download page; you have to Google to find it.

SSL 3 is still fine for protecting integrity, just not confidentiality, so it is okay for downloads.



Up until recently even OpenSSL's own download page was not HTTPS.


Just like Mozilla's page, that makes perfect sense as long as the authenticity can be verified via other means, such as PGP signatures.

What use is HTTPS to protect the very thing you need in order to implement it? It's a basic chicken-and-egg problem, replayed every time OpenSSL gets compromised, and successfully sidestepped by securing the download with something else.

Think about it: there's an SSL break and you need to update OpenSSL. What good does it do you to have it available over SSL?


Because not all vulnerabilities are the same, and not all are total compromises. For example, BEAST was temporarily worked around by switching to a stream cipher (RC4) instead of a block cipher. Everyone scrambled to update their OpenSSL version, then switched back, or switched to an AEAD mode in TLS 1.2.

Not that people shouldn't be using GPG, but only using it means you only protect the paranoid.


Tell that to anyone building Haskell from source. It depends on itself.


The first C++ compiler was written in C++:

http://www.stroustrup.com/bs_faq.html#bootstrapping

However, the first C compiler was not written in C, but in NB, an intermediate step from B to C:

http://stackoverflow.com/questions/18125490/how-was-the-firs...


I remember back in the day you used to be able to build gcc with various other compilers to bootstrap yourself. I think gcc now requires gcc features, so you're in the same boat.

But anyway, the comparison isn't very good because haskell/gcc don't develop sudden security vulnerabilities that instantly turn existing binaries unusable for getting new ones.


It's not enough to check the hash of a downloaded executable if the hash you're checking against came from the same source as the suspect file.

I find it so infuriating when I see a download page with hashes and the download links next to each other, as if that's any help at all.


Still better than seeing this as "installation instructions":

    curl http://not-ssl.somesite.ru/ba.sh | bash
Often without an https:// option available at all.


I've made a list here:

http://curlpipesh.tumblr.com/


The vast majority of your weblog postings describe https downloads, so it is mostly tangential to grandparent’s complaint and to the article. A .sh script may be a messier and less idiomatic file format than .pkg/.deb/.rpm/.msi, but a pkg from an unknown server is just as dangerous as a shell script from an unknown server.


Please add the case of Rust package manager Cargo, found at http://crates.io


Done. Thanks!


My personal beef is with curling plain HTTP (no 's') URLs - getting an installation shell script over HTTPS is not unreasonable (unless you're instructed to pass a flag that disables certificate checks, such as -k for curl, in which case it is more or less the same thing).

By default, `curl https://blah.blah/` will only work if the TLS certs are proper & validated. This isn't about trusting the author (you'll be running their code anyway, one way or another) but the transport medium (HTTPS!).
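For what it's worth, other HTTP clients behave the same way. A rough Python sketch (the URL is a made-up placeholder for a site with a bad certificate), showing that validation is on by default and that turning it off is the moral equivalent of curl -k:

    import ssl
    import urllib.error
    import urllib.request

    url = "https://self-signed.example/install.sh"  # placeholder: a host with a bad cert

    # Default behaviour: the certificate chain and hostname are checked, like plain `curl`.
    try:
        urllib.request.urlopen(url)
    except urllib.error.URLError as err:
        print("refused:", err.reason)

    # Explicitly disabling verification is the `curl -k` of the Python world:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    urllib.request.urlopen(url, context=ctx)  # now you have no idea who served the script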

If I had your show-and-shame tumblr, I'd only include http:// links.


The hash is not for cryptographic purposes; it's to detect corrupt downloads. Ideally you should provide the hash and also PGP-sign your releases.
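The publisher side of that is cheap, too. A minimal sketch (file names are illustrative, and it assumes a gpg binary with a release key available locally): write a checksum file, then add a detached, ASCII-armoured signature to it.

    import hashlib
    import subprocess
    import sys

    # Usage (illustrative): python make_release_sums.py file1.tar.gz file2.dmg ...
    with open("SHA256SUMS", "w") as out:
        for path in sys.argv[1:]:
            digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
            out.write("%s  %s\n" % (digest, path))

    # Produces SHA256SUMS.asc; users check integrity against SHA256SUMS and
    # authenticity against the detached signature.
    subprocess.run(["gpg", "--armor", "--detach-sign", "SHA256SUMS"], check=True)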


That is bizarre. Like "I thought XMODEM went out of style in 1992" bizarre. Shouldn't checksumming the payload be performed at the application layer?

(A quick Google shows that indeed it can be: the Content-MD5 header <http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html>. Wonder how widely supported it is by HTTP software used by people who like to check hashes of things they download.)
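For the curious, checking it is not much work. A sketch in Python against a placeholder URL (Content-MD5 carries the base64 of the body's MD5 digest, and few servers bother to send it):

    import base64
    import hashlib
    import urllib.request

    url = "http://downloads.example/tool.tar.gz"  # placeholder URL

    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        declared = resp.headers.get("Content-MD5")  # base64-encoded MD5 of the body, if the server sets it

    if declared is None:
        print("server did not send Content-MD5")
    else:
        actual = base64.b64encode(hashlib.md5(body).digest()).decode()
        print("transfer intact" if actual == declared else "corrupted in transit")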


Well, I understand your reaction, but it may help to point out that there are three different kinds of algorithm involved here, and that to ensure content completeness and address security concerns (confidentiality, integrity, availability, non-repudiation, etc.) you want them all, and all done correctly:

  *  Checksums, like in Xmodem or CRC
  *  Cryptographic hashes (including MACs)
  *  Cryptographic signatures (ie OpenPGP key or cert)
As noted in the other comments, these protect against different kinds of problems in transmission, but used correctly in combination they can protect against both glitches and active attacks.
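A toy illustration of the three tiers in Python (file names are placeholders; the signature step assumes a detached .asc file and that the signer's public key is already in the local keyring):

    import hashlib
    import subprocess
    import zlib

    data = open("release.tar.gz", "rb").read()  # placeholder file

    # 1. Checksum: catches line noise and truncation, but is trivial to forge.
    print("crc32 :", format(zlib.crc32(data), "08x"))

    # 2. Cryptographic hash: an attacker can't craft a different file with the same
    #    digest, but can swap the digest itself if it travels over the same channel.
    print("sha256:", hashlib.sha256(data).hexdigest())

    # 3. Signature: binds the digest to a key you (hopefully) obtained out of band.
    subprocess.run(["gpg", "--verify", "release.tar.gz.asc", "release.tar.gz"], check=True)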

To say that it is difficult to implement all of these correctly and in concert is a grave understatement, but this is what modern crypto software, and the network protocols that use it, have to do.

Now back to the thread on HTTP header checksums :)


I am not sure why you think this is bizarre. In an ideal world TCP is reliable; HTTP assumes it runs on a reliable transport, so it doesn't do any further integrity checking.

In the real world there are bugs in TCP stacks and in HTTP implementations that cause HTTP traffic to get corrupted. I see this every day. Some applications do implement extra checking, most applications do not. Browsers and wget and curl can't implement any extra checking because the way you check is application specific. There is no standard way to do it; what you mention there is an esoteric feature.

Just for anecdotal fun, Logic Pro tries to download over 50GB of assets over HTTP (each individual file is many gigabytes). It has never worked for me over any of my networks (and in fact I wrote a tool to fix this).
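(Not my actual tool, but a minimal sketch of the kind of checking an application can layer on top of HTTP when the vendor publishes a digest: resume a partial download with a Range request and refuse to use the file until it hashes to the published value. URL and digest below are placeholders.)

    import hashlib
    import os
    import urllib.request

    url = "http://assets.example/instrument-library.pkg"  # placeholder URL
    expected_sha256 = "0000...placeholder..."              # digest published out of band
    dest = "instrument-library.pkg"

    # Resume from where the last attempt left off, if a partial file exists.
    req = urllib.request.Request(url)
    have = os.path.getsize(dest) if os.path.exists(dest) else 0
    if have:
        req.add_header("Range", "bytes=%d-" % have)

    with urllib.request.urlopen(req) as resp:
        # If the server ignored the Range header, start over from scratch.
        mode = "ab" if (have and resp.status == 206) else "wb"
        with open(dest, mode) as out:
            for chunk in iter(lambda: resp.read(1 << 20), b""):
                out.write(chunk)

    # Only trust the file once the whole thing matches the published digest.
    h = hashlib.sha256()
    with open(dest, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    if h.hexdigest() != expected_sha256:
        raise SystemExit("corrupted download: delete %s and retry" % dest)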


> In the real world there are bugs in TCP stacks and in HTTP implementations that cause HTTP traffic to get corrupted

I'm very much aware of this; hence my surprise that such checksumming is not commonly performed at the application layer.

> There is no standard way to do it; what you mention there is an esoteric feature.

The Content-MD5 header is defined in RFC 2616. It is, by definition, standard. If it's not widely supported, then I think that it would behoove the people who care about these things to switch to servers/clients which do support it.

(I suspect the intersection between "people who know/care how to use md5sum" and "people who know how to set/read an arbitrary HTTP header" is fairly large. Hence my surprise at the common practice of ignoring this capability.)


I'll note that even today "ASCII mode" in FTP haunts people who aren't careful when downloading -- and that it is a frequent reason I find myself having to urge MD5 checks of embedded firmware I work on. At least these days the firmware has enough room to validate a download itself before installing it.


Well what if you don't get the binary from the same source as the hash, or it's been sitting on a disk for a while and you want to double-check its integrity?


The post to which I replied specifically focused on the case where the binary came from the same source as the hash, and was being used to confirm the integrity of the download. I was not addressing other cases.

(Although I will note that file systems, like application protocols, should maintain their own integrity; however most do not. Which also seems bizarre given that it's 2014.)


Other programs could, accidentally or maliciously, step on the contents of a file. Filesystem checksums are awesome (I use btrfs), but they don't protect your file from everything. Hm... maybe a versioned filesystem where you can go back to the "original" version of a file and verify its checksum? :)
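You can fake a crude version of that in userspace: record the digests of files you care about at "archive time" into a sidecar manifest, then re-verify whenever you like. A small sketch (the manifest name and command-line interface are made up):

    import hashlib
    import json
    import sys

    MANIFEST = "checksums.json"  # sidecar file of known-good digests

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if len(sys.argv) > 1 and sys.argv[1] == "record":
        # e.g. python manifest.py record photo1.jpg photo2.jpg
        json.dump({p: sha256(p) for p in sys.argv[2:]}, open(MANIFEST, "w"), indent=2)
    else:
        # e.g. python manifest.py verify
        for path, digest in json.load(open(MANIFEST)).items():
            print(path, "OK" if sha256(path) == digest else "CHANGED OR ROTTED")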


Yes, protecting against malicious / accidental modification is a separate problem from protecting against bit rot / gamma rays. The former is obviously much more difficult than the latter. My surprise stems from the fact that common tools are not robust against the latter by now.

(My day job is working with enterprise-grade content-addressed block storage, so maybe I've just set my data-protection expectations too high.)


Indeed, I find it surprising that there's no filesystem flag giving the opposite behaviour to "compress contents to save space": something like "transparently store this file bloated by a fountain codec to ensure integrity on a bitwise-unreliable backing store" or maybe "transparently generate a Parity Volume Set from this file and stripe it across the disk to protect against lost sectors."

Both of these options would protect your important files against failing single disks without having to do any RAIDing. The unimportant data (e.g. the OS itself, caches, etc.) could be reduced-redundancy, since it doesn't need to be captured in a disk rescue.


It would be a great feature, but it obviously adds complexity. You can get halfway there with single-disk ZFS, which lets you store multiple copies. But the key feature is that files and directory info are checksummed, and regularly checked, so corruption cannot occur silently. I think, though, that usually the whole disk is lost, rather than individual blocks going bad (so '90s).

For any archival DVDs I burn, I compress the data, then run it through PAR2: http://en.m.wikipedia.org/wiki/Parchive


Do corrupt downloads really happen these days, and what causes them? I can't remember having a single corrupt HTTP download in the past 10 years.


Yes, they do happen. Not as often as 20 years ago, but they still happen. Personally I can't download more than about 1GB without an error. If you use bittorrent you might want to look in the logs to see how often it re-downloads blocks because the block it just downloaded has been corrupted. You might be surprised.

I don't know the root cause in general, but for me the most common cause has been bugs in the TCP/IP stacks of cheap home routers.


I've had some corrupt downloads from a bad wireless NIC.


The only people who provide either a checksum or a PGP signature for downloads are geeks. Those geeks usually distribute their files via mirrors, which all download from either the main source or other mirrors. Any of those downloads could have been corrupted mid-transfer (for a variety of real-world reasons), so the checksum verifies the file's integrity. The PGP key is there for the 1% of people who want to verify that the file came from the author; it's up to you to figure out how to get the author's PGP key in a secure fashion (typically HTTPS from their website, or via e-mail).


No, but it's another step in the process for sure. It's not useless.


> It's not useless.

How not?


It's trivial for a MITM to patch all the binaries he's intercepting; it's less trivial[0] to modify the corresponding hashes on web pages. I'm talking about automation here, not messing with a single web page, which a human can do in a couple of minutes.

[0] You've probably already sent out the unmodified hashes before you sent out the patched file. First idea: Keep a set of known binaries and a set of hash replacements. When you encounter a binary, first check if you've seen it before. If you haven't, deliver it unmodified, then compute hashes in a couple of known formats, and the hashes of the corresponding patched binary. When you deliver text content, check if any of the previously computed hashes match and replace them with your evil version. When you encounter the binary a second time, deliver the patched version, assuming that you either sent a modified hash or there was no hash in the first place.


To further improve your idea: you could detect links to binaries and look for hash-like strings next to them. Then you can pre-download, detect hash algorithm, patch, hash the patched binary and then serve the page with the modified hash (and cache everything for later use).

It might make the first visit slower, but for small binaries (or large bandwidth) it's probably not that noticeable (especially over Tor, where slowness is expected).

Often the hash is in a separate file, which makes the attack even less noticeable, since you can patch and hash the binary as soon as you detect it, before the user downloads the hash file (and you can even delay the hash download as long as you need).

As I see it, downloading hashes through unsafe channels delivers a false sense of security, which is even worse than no security at all.


two examples: if the hashes are served over an HTTPS connection that you believe trustworthy, and/or signed by a key you trust and can obtain securely


The hash is coming from the same channel, so no, it cannot improve anything.

If it is an HTTPS connection where you trust the CA, you don't need the hash; the binary coming from that connection is just as safe. If it is signed by a key you can obtain securely, well, just sign the executable and forget about the hash.


not necessarily--in terms of hashes, i was referring to the practice of serving hashes via TLS and larger binaries via http. i believe virtualbox does this, for example. in such cases as long as you trust the TLS connection used to obtain the hashes, you can reasonably trust the binary obtained insecurely as much as you trust your TLS connection and your hashing algorithm. and indeed, hashes could/should be replaced with signatures in those circumstances, and sometimes are (e.g. tor project, but they also serve tor browser binaries via tls).
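for concreteness, a rough sketch of that flow in python (urls are placeholders, sha-256 assumed as the algorithm): fetch the digest over TLS, fetch the binary over plain http, and only accept the binary if the two agree.

    import hashlib
    import urllib.request

    hash_url = "https://downloads.example/product.dmg.sha256"  # placeholder, served over TLS
    file_url = "http://mirror.example/product.dmg"              # placeholder, plain-HTTP mirror

    expected = urllib.request.urlopen(hash_url).read().split()[0].decode()

    h = hashlib.sha256()
    with urllib.request.urlopen(file_url) as resp:
        for chunk in iter(lambda: resp.read(1 << 20), b""):
            h.update(chunk)

    # The binary is now exactly as trustworthy as the TLS connection that delivered the hash.
    if h.hexdigest() != expected:
        raise SystemExit("mirror served a different file")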

most people most of the time trust signatures/hashes served over TLS. in cases where they don't trust the upstream CA (which is commonly influenced/chosen by the developers), there's also a good chance they shouldn't rationally trust the source of that code. i see your point technically, but in practice you're almost describing a "trusting trust" attack, which i don't think is the most common threat model for app downloads.

and i agree re: signed executables, but we live in a TOFU world and vanishingly few people personally verify fingerprints OOB with devs, so for many people not looking at trust paths and such, you're practically talking about trusting EFF's CA vs. mozilla's. perhaps that's a significant distinction to some, but i'd probably characterize it differently than you have.


And the server's not compromised.


this is a valid concern, but if that happens, significantly more people are screwed than those accessing the server via a bad tor exit


Really hate the Microsoft standard here of "unknown error" everywhere. Is it really that hard to give a name to those, MS? "File signature verification issue" would be a million times more helpful. This has always bothered me deeply about MS software: it's almost impossible to tell what's happening from logs / error messages.


How would they get people to pay for support if the errors were explained in a clear language?


One quick Google search (but you do need the 0x) gets you to this documentation:

BG_E_VALIDATION_FAILED (0x80200053) The application requested data from a website, but the response was not valid. For details, use Event Viewer to view the Application Logs\Microsoft\Windows\Bits-client\Operational log

Which seems pretty clear to me (although plausibly not to an end user).


This exit node is now flagged as BadExit: https://atlas.torproject.org/#details/8361A794DFA231D863E109...


Even SSL may not help. Cloudfront is now offering what they call "Flexible SSL". This means Cloudfront gets an SSL cert which allows them to impersonate the site; they offer an SSL connection to the user's browser, and Cloudfront acts as a man-in-the-middle, making a connection to the destination site. An unencrypted connection in many cases.

This is SSL as security theater.


* Cloudflare

It's obviously not nearly as secure as end-to-end SSL, but it's probably still useful. The connection between the client's machine and Cloudflare's servers is more likely to be under attack (unencrypted Wi-Fi, hacked personal routers, rogue Tor exit nodes, etc.) than the connection between datacenters.


not always. as we've learned from the NSA disclosures, there are many layers of indirection.


Would you say it's better than plain HTTP?

Or is the false sense of true security a bigger detriment?


> Or is the false sense of true security a bigger detriment?

That's a really tough call.

Cloudflare makes no security guarantees. They don't even commit to keeping your private key secure when you give it to them. That's a bad sign. One wonders how they fund their free MITM service.


i think it really depends on your threat model


This is important; there are cases of "just plain insecure" but other than that, it's very nuanced.

How about you randomly generate all your passwords and write them down on a piece of paper in your wallet? For many threat models, that's far more secure than even using a password manager. For other threat models it's far less secure than using a password manager. Other than things that are just flat-out broken, "more secure" and "less secure" don't exist without qualification.


How is this different from trusting the website's own load balancer, which might terminate the SSL connection and relay unencrypted traffic to the servers?


The unencrypted connection between the load balancer and the backend server would be taking place behind a firewall.


Since we're talking about issues with bad devops (unencrypted between Cloudflare and appserver), the same bad devops could apply here too. It wouldn't be the first time that someone left an "internal" appserver on a public IP, or a firewall rule was miswritten, etc.


Why wouldn't SSL help? Unless the offending exit node has the requested site's cert, there's almost no way they can carry out a MITM attack on an SSL request undetected. That's kind of the whole point of certs.

Or is this an indictment of Cloudfront offering to be your SSL termination point?


This is an indictment of Cloudfront offering to be your SSL termination point, and using multiple-domain certs to do it. Here's the Black Hat paper on how to exploit that.

https://bh.ht.vc/vhost_confusion.pdf


> this is the only node that I found patching binaries

This suggests the exit relay itself is doing the patching. Isn't it more likely that some MITM between the exit relay and origin server is responsible?


I don't see why it would be more likely. The exit-node explanation is simpler.


As he states, the way he found that node wasn't comprehensive. We shouldn't take that statement to mean there are no other exit nodes patching binaries. He just stopped after he proved it wasn't only a theoretical exploit.


That's entirely possible and wouldn't be the first time that an exit relay's upstream or data center is responsible for MitM attacks.


Companies and developers need to make the conscious decision to host binaries via SSL/TLS.

Of course the problem with that is that countries with censorship, like China, seriously throttle or outright block SSL connections that are made outside of the country. And sometimes they even use something like SSLstrip to do a MITM attack with a self-signed certificate.

Average users there are also used to seeing self-signed certificates locally, and so never even think twice before dismissing a message alerting them that an SSL certificate is not valid.


> Companies and developers need to make the conscious decision to host binaries via SSL/TLS

Yes. And source code, too. If you can't provide SSL for downloads, you should be using a third-party service like GitHub that can.


> If an adversary is currently patching binaries as you download them, these ‘Fixit’ executables will also be patched. Since the user, not the automatic update process, is initiating these downloads, these files are not automatically verified before execution as with Windows Update.

Except that this FixIt binary will have no signature and Windows will light up like a Xmas tree. So it really comes down to whether you pay attention to these warnings or you don't. And if you don't, despite all of Microsoft's efforts over the past 10 years, then you get what you deserve.


If you're already able to MITM the user, what about

- patching (more .. corrupting) binaries, hoping to break Windows Update

- intercepting and replacing the top 5 'Make Windows Update work again' downloads with a (signed if you want) application of your own?

Bonus points for injecting 'Already got this host' into requests from now on, so that Windows update magically starts working again..


I wonder how security professionals acquire their knowledge. Even more curiously, how do malware writers learn to do this?

Programming can be learned easily by reading and practicing, but with IT security one doesn't know where to begin or what the journey is like.


You start here: http://thelegendofrandom.com/blog/archives/223 or here http://www.reteam.org/ID-RIP/database/essays/es29.htm and spend hundreds of hours on reading and using debuggers.


omg that is so cool!!! so this is how 'serial crackers' work.


Same here. It always eluded me. Security seems like a field where there is still no definite way to get good at it.

Being a good security researcher requires (among many other things) the ability to understand how things work. How ANY thing works, down to the LOWEST level. Idk about others, but I always considered 'security guys' to be the elite of the elites in the IT world.


yeah i knew it boils down to some really low-level knowledge, but it seems like a lot of black magic to me regardless of how people end up being able to publish papers or discover vulnerabilities... is it just educated trial and error? poking at things where you think, hmmm, maybe there's a hole here somewhere, and then voila, you come across a CVE?


Creating malware is simple, creating good malware that stays undetected is hard, very hard.



