It hasn't switched because Linus (1) doesn't think anyone would do that, and (2) sees hash collisions only as an accident vector, not an intentional attack vector.

> You are _literally_ arguing for the equivalent of "what if a meteorite hit my plane while it was in flight - maybe I should add three inches of high-tension armored steel around the plane, so that my passengers would be protected".

> That's not engineering. That's five-year-olds discussing building their imaginary forts ("I want gun-turrets and a mechanical horse one mile high, and my command center is 5 miles under-ground and totally encased in 5 meters of lead").

> If we want to have any kind of confidence that the hash is really unbreakable, we should make it not just longer than 160 bits, we should make sure that it's two or more hashes, and that they are based on totally different principles.

> And we should all digitally sign every single object too, and we should use 4096-bit PGP keys and unguessable passphrases that are at least 20 words in length. And we should then build a bunker 5 miles underground, encased in lead, so that somebody cannot flip a few bits with a ray-gun, and make us believe that the sha1's match when they don't. Oh, and we need to all wear aluminum propeller beanies to make sure that they don't use that ray-gun to make us do the modification _ourselves_.

> So please stop with the theoretical sha1 attacks. It is simply NOT TRUE that you can generate an object that looks halfway sane and still gets you the sha1 you want. Even the "breakage" doesn't actually do that. And if it ever _does_ become true, it will quite possibly be thanks to some technology that breaks other hashes too.

> I worry about accidental hashes, and in 160 bits of good hashing, that just isn't an issue.




I think it is worth noting that the quotes in your comment are from 12 years ago: http://www.gelato.unsw.edu.au/archives/git/0504/0885.html

I don't mean this to say that you are being inaccurate, just that his position now seems a little different:

"Again, I'm not arguing that people shouldn't work on extending git to a new (and bigger) hash. I think that's a no-brainer, and we do want to have a path to eventually move towards SHA3-256 or whatever" http://marc.info/?l=git&m=148787457024610&w=2


True.

I was just answering the question "Why hasn't Git switched...People have been warning that SHA-1 is vulnerable for over a decade"

Linus' 12-year-old opinions are the relevant thing for why it hasn't changed. A decade from now, things may be different.


He seems to have changed his tune now that he can't hide behind the "that's only an imaginary possibility" cover: http://marc.info/?l=git&m=148787047422954

> Do we want to migrate to another hash? Yes.


His original attitude seems to be more like "this is sufficiently unlikely in practice that I consider attempting to mitigate it in advance to be over-engineering, with a higher opportunity cost than it's worth". Which, well, I think when it comes to security he has a nasty tendency to underestimate the risks and thereby pick the wrong side of the trade-off, but to me it's clearly a trade-off rather than something to hide behind.


I think it's a reasonable assumption that, as computing power increases, hash functions will be broken. Not that they have to be, but it's reasonable to assume that, and I think it's beyond short-sighted for Torvalds to have failed to build a mechanism for hash-function migration into git from the very start.


Cryptanalytic research is the fundamental thing that broke SHA-1, not simply the increase in available computing power. So that's not really a 'reasonable assumption'; if it were, we could 'reasonably' assume SHA-512 will never be broken.


The point remains: since computing power increases and cryptanalytic research advances, we really should make sure software that depends on cryptographic hashes has a reasonable way to move to different algorithms. At the very least, we could prefix each stored hash with the name of the algorithm that generated it, as sketched below.
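Something like this, say (a rough Python sketch; the "algo:" tag format is made up for illustration, not git's actual object encoding):

    import hashlib

    # Hypothetical algorithm-tagged digest: "sha256:<hex>".
    def tagged_digest(data: bytes, algo: str = "sha256") -> str:
        return f"{algo}:{hashlib.new(algo, data).hexdigest()}"

    def verify(data: bytes, tagged: str) -> bool:
        # The stored tag tells us which algorithm to recompute with,
        # so old sha1-tagged objects stay checkable after a migration.
        algo, _, digest = tagged.partition(":")
        return hashlib.new(algo, data).hexdigest() == digest

    print(tagged_digest(b"hello"))                            # sha256:2cf24d...
    print(verify(b"hello", tagged_digest(b"hello", "sha1")))  # True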


The advances of research and computing power are vastly outpaced by basic things like digest size. Even if you came up with a complexity reduction against SHA-256 of the same order as the one developed against SHA-1, you still wouldn't be able to find any SHA-256 collisions.
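Back of the envelope, assuming the published SHA-1 attack cut collision cost from the generic 2^80 birthday bound to roughly 2^63:

    # Generic collision cost is the birthday bound, 2^(n/2) for an n-bit digest.
    sha1_generic = 2**80
    sha1_attack = 2**63                      # approximate SHAttered cost
    speedup = sha1_generic // sha1_attack    # ~2^17

    # Apply the same hypothetical speedup to SHA-256's birthday bound.
    sha256_generic = 2**128
    sha256_attack = sha256_generic // speedup
    print(f"2^{sha256_attack.bit_length() - 1}")   # 2^111: still far out of reach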



