Hashing in the client leads to a fair share of security issues, especially if it's not also hashed on the backend using the usual salted hashes.
There are protocols like SRP to do it securely, but it's non-trivial. And remember that such a protocol is only useful if you trust the client implementation—it's kind of pointless for a third-party webpage or app.
Use randomized per-site passwords. Solves everything.
Hashing twice, on the client as well as on the server, won't introduce any security issues if the choice of algorithm is good, and it does further protect the user's password from being harvested from passive MITM'd SSL like it is on some corporate networks. (Yes, I know MITM means they could rewrite the client to steal passwords, but that's work.) If I were designing a password auth scheme that highly prioritized user security, I would hash both client and server side.
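A minimal sketch of the "hash on both sides" idea, assuming PBKDF2 for both rounds; the function names, the fixed client-side salt, and the iteration counts are all illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def client_hash(password: str) -> str:
    # Client side: the value sent over the wire is a hash, not the raw
    # password, so a passive listener never sees the plaintext. (A real
    # scheme would salt this per user rather than with a constant.)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), b"client", 100_000).hex()

def server_store(wire_value: str) -> tuple[bytes, bytes]:
    # Server side: treat whatever the client sent as the password and
    # salt-and-hash it again before storing.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", wire_value.encode(), salt, 100_000)
    return salt, stored

def server_verify(wire_value: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison against the stored salted hash.
    candidate = hashlib.pbkdf2_hmac("sha256", wire_value.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

Note that the server never stores the wire value itself, so a database dump still can't be replayed as a login credential.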
> and it does further protect the user's password from being harvested from passive MITM'd SSL like it is on some corporate networks.
It might protect the password if the user is reusing it elsewhere, but it doesn't protect the account the password is securing during the intercepted transmission.
You can't enforce complexity rules, check against known leaked passwords, check that users don't reuse their username as their password, or anything like that, if you hash on the client too.
Primarily to get around arbitrary password rules that do not enhance the security of the password but serve to weaken it, e.g. "only use special characters from this list: !@#" or "sorry, your password is TOO LONG" (?!)
That's true, but only to a point. You can actually server-side check username/password equality, and a not overly long list of other unwanted passwords. You just have to check each one.
If you hash in the client, attackers with dumped hashed credentials (from elsewhere, which might only hash plaintext) don't need to try to reverse/"crack" the hashes, they can just use them as-is. With users reusing passwords across sites, that would be pretty terrible.
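To illustrate that pass-the-hash concern, a sketch assuming a hypothetical site whose client sends a plain, unsalted SHA-256 of the password as the credential:

```python
import hashlib

def client_login_value(password: str) -> str:
    # Hypothetical client that sends SHA-256(password) over the wire.
    return hashlib.sha256(password.encode()).hexdigest()

# A dump from some *other* breached site that stored unsalted SHA-256:
leaked_hash = hashlib.sha256(b"hunter2").hexdigest()

# The attacker never needs to crack anything: for a reused password,
# the leaked value is exactly what this site's client would send.
assert leaked_hash == client_login_value("hunter2")
```

This is exactly why an unsalted client hash is plaintext-equivalent for the site that accepts it.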
That is why he said we should do both: hash client AND server side. Hashing on the client prevents sending plaintext passwords to a server that could mishandle them, e.g. by logging them like Twitter once did. Client side you can also check on registration whether the user's password is known to have been pwned (1Password offers an API for that) and use a secure hashing algorithm. Server side, the hash (which is then effectively the password) will be salted and hashed again.
That makes the hash of the user's password equivalent to a plain text password. And that means all those leaked databases, which previously only contained hashes that needed to be bruteforced to be reversed, suddenly function as a source of plaintexts.
I think (s)he is saying to hash client side, salt, then hash again server side. So if the database gets compromised, the attacker still has to apply the salt to brute force the hash.
I can't see a reason why this hurts, but it's probably not worth it. The only benefit I can see it having is if the user reuses their passwords on other sites, as now a potential malicious MITM would have to brute force the client side hashed password in order to reuse it on other sites. They would still be able to use that client side hashed PW to authenticate to the service in question, though.
Yeah, that's exactly my point. So the MITM who got the PW after the client-side hash, but before the server-side salt and hash, would have to brute force that hash in order to use it on those other sites. That's the only benefit I see, and it requires a very specific set of circumstances.
The first sentence is true (and I can't believe how far down this thread it first appeared), but nobody was advocating abolishing server side hashing as well.
Basically, hashing on the client does absolutely nothing for the security of the transfer or storage of the password. Whatever the client passes to the server, whether it's hashed or not, is effectively their password, and intercepting that (again, whether hashed or not) would give an attacker the actual credentials of the user.
Oh, I see. Other websites give you the 'password' to this website.
But only if neither site uses salt, so salt all your hashes. (Not even a salt is necessary here, just a site-wide addition would be fine. Or honestly you could just use a hash that wouldn't be used by a site too dumb to salt their database.)
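A sketch of that "site-wide addition" idea: mixing a constant like the domain into the client hash is enough to make hashes from different sites non-interchangeable, even for identical passwords. (The function name and the `site:password` layout are illustrative.)

```python
import hashlib

def site_hash(site: str, password: str) -> str:
    # A site-wide constant (e.g. the domain) acts as domain separation:
    # the same password produces unrelated hashes on different sites.
    return hashlib.sha256(f"{site}:{password}".encode()).hexdigest()

same_password = "hunter2"
a = site_hash("example.com", same_password)
b = site_hash("other.example", same_password)
assert a != b  # a leak from one site can't be replayed at the other
```

This protects against cross-site reuse of the hash, but (as noted elsewhere in the thread) a constant isn't a per-user salt and doesn't slow down cracking the hash itself.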
No - As mentioned in a comment elsewhere, these kinds of experiments fall dangerously into a category of duct-tape crypto.
If you want secure password transfer, do not construct a ghetto solution with stacked hashes, but use a proper protocol like SRP (which, again, is pointless unless the client is trusted).
Especially in the case of Twitter, doing this provides absolutely nothing with respect to credential confidentiality, as both sides of the transaction are written by Twitter.
I don't get it. Assuming that a passively listening attacker has broken TLS, additionally hashing at the client is helpful. The service at hand is now (most likely) insecure, but at least other services do not suffer from the plaintext password being exposed. It's an advantage, albeit a small one, but it's easy to do with virtually no risk of error.
Stacking hashes is not cryptographically sound. The additional hashing weakens the credentials. Cryptographic hashing is a very fragile subject.
"Assuming an attacker has broken TLS" - this makes no sense. If I had broken TLS, I could send you a login form with no hashing. I could modify your legitimate API requests, without the need for your credentials. I could take your token and forge requests as I wanted. All your assumptions go down the drain, as the security relied on the presence of TLS.
Basically, making this setup is generally harmful, while only providing negligible benefit in a doomsday scenario where all bets are off regardless.
If you truly want to have better security, you need an entirely different system. E.g., local private keys used to sign requests, or a proper password exchange/validation protocol. Smacking another hash on top does nothing good, and the idea is a good example of why normal people shouldn't crypto.
I've heard the "additional hashing weakens the credentials" line since I started programming, but no one ever bothers to link a citation, or seems to realize, reasoning from e.g. the pigeonhole principle, that the increased likelihood of collisions is negligible for typical hash choices. It also flies in the face of common in-practice schemes (pre-bcrypt) like doing n rounds of sha256. There should be no real problem with doing the first round on the client.
I agree with you on the general pointlessness of client hashing, though, and oh what a world if we had pub-priv keys for authentication instead of passwords...
> I've heard the "additional hashing weakens the credentials" line since I started programming, but no one ever bothers to link a citation
The main issue is that security analysis is non-trivial due to the interaction of their security guarantees. The end result is effectively a new hashing algorithm. Figuring out the properties of this new algorithm requires new cryptanalysis (which is more than just avalanche tests).
> It also flies in the face of common in-practice schemes (pre-bcrypt) like doing n rounds of sha256. There should be no real problem with doing the first round on the client.
Old practices are deprecated for good reason, primarily due to flaws. Looking at what we used to do is not that useful.
The main problem is that it's not just "n rounds". The hash on the client must be salted, it must be salted uniquely for that credential set, and it must use a different salt from the backend hash. Plenty of ways to mess this up, and that's before we even consider the implementation details of salting.
Then comes the interactions between likely different hashes. All in all, it's pretty complicated.
> I agree with you on the general pointlessness of client hashing, though, and oh what a world if we had pub-priv keys for authentication instead of passwords...
Especially with the only sensible procedure being password managers, it's silly we don't have asymmetric authentication. :(
I'd like to highlight that "passively listening" was written intentionally. As soon as an attacker becomes active, he's at an incomparably higher risk of being detected.
If TLS was broken, I would always be able to get your password by just removing the hashing from the login page. I'd also be able to modify all legitimate requests, and be able to obtain your genuine login tokens for any site.
That requires an active attack though. Yes, that’s always possible if TLS is broken, but it takes more work and is detectable. (Not saying it’s easy to detect, but it’s detectable in theory, where passive sniffing is not.)
If TLS is broken, you lose all message authentication, rendering tampering entirely undetectable.
Passive sniffing requires the same attack vector, and the idea of an attacker only utilizing passive attacks makes no sense.
The point is that additional hashing designed for the premise of "broken TLS" fails the threat model test: it is pointless, as the only time it is applicable is when everything has fallen apart.
If your machine's TLS is being intercepted, nothing matters anymore. That would be the equivalent of having a keylogger installed. There's no point in trying to protect such a client anymore; it's game over then.
Passive interception happens. Especially when you're using a cipher suite that isn't forward-secure; a breach of the server could allow an attacker to decrypt previously recorded sessions.
I think the idea is to hash on the client _and_ the server. The client 1-way hash (~~unlike the server's 2-way~~) is the new "password" now, and its sole purpose is so that you don't get to see the raw password on the server.
indeed, scratch that. i rewrote that comment a couple of times, and that 2-way part is of course not correct. this would defeat the point of hashing passwords if they were reversible
notwithstanding all the other caveats mentioned here, wouldn't hashing on the client side make it possible to salt the hash so that different sites generate a different hash, thus making it unlikely that the hash can be reused even if the actual password is the same? the salt could even include a time component making the hash expire after a time.
this obviously does not eliminate the need for other security measures, so it's possibly more a question of "is it worth it?"
Wouldn't hashing on the client side possibly introduce a reduction in the complexity of the password? The hashed password on the client side could be used in place of the plaintext password if the channel is insecure. Let's say the password length limit is 256 ASCII chars; that's 8*256 = 2048 bits (ok, a little less, since not all ASCII chars are printable), but if hashed on the client (let's be generous and say the hash is 1024 bits), it's still half the complexity of the plaintext.
A 128 bit random plaintext is more than secure enough...
If you're trying to argue that the user puts in, say, 60 bits of entropy, and that the hashing algorithm is going to accidentally throw out 10 of those to result in 50 bits of entropy, I believe that any hashing algorithm that did that in a way that anyone can exploit with realistic computing power (a computer smaller than the size of the solar system) would be considered irredeemably broken.
If you are not including humans, your maths might be right. With humans, not all 8 bits of extended ASCII are realistically used in passwords. And you rarely get passwords longer than 10 characters.
And then: by the password not leaving the system, you avoid another issue with humans: if the hash is lost (accidental logging and leaking logs and such things happen), it won't allow signing in to other accounts where the lazy human used the same password.
For machine-to-machine communication tokens with higher entropy can and should be used.
It's less about bits and more about entropy. Passwords tend to be low on entropy because of the words and patterns present in the sequence. It's almost always going to be harder to break passwords by trying every possible hash than by doing dictionary attacks against plaintext passwords. 256 bits is around 43 characters of base64 text. Compare that to most passwords and it's safe to say they aren't losing entropy when hashed.
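Rough arithmetic behind that comparison, assuming ~95 printable ASCII characters and a typical 10-character password (both figures from earlier in the thread):

```python
import math

# 256 bits of randomness rendered as base64 (6 bits per character):
base64_chars = 256 / 6  # ~42.7 characters

# A 10-character password drawn uniformly from ~95 printable ASCII
# characters carries at most this much entropy:
password_bits = 10 * math.log2(95)  # ~65.7 bits, far below 256
```

So even a generously random human password has far less entropy than a 256-bit hash output can represent; the hash isn't the bottleneck.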
Isn't the main security feature of hashing that you hash against a salt and that no one but the server knows the salt? Once you send the salt to the client (and anything on the client should be considered insecure) you give the ability to generate lookup tables for common passwords. Without the salt it's much harder to brute-force the password.
No. Salts are not intended to be secrets. The expectation is that, in the case of a breach, salts are also exposed. What they do is prevent precomputation of lookup tables, granting the developer a bit of time after a breach before all bets are off.
No, the salt can be public (it was on Unix machines before the invention of /etc/shadow). The important thing is that it is unique per password, so that Hash(Salt#Password) is unique even if two passwords happen to be the same.
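A sketch of unique-per-password salting, assuming PBKDF2 (function name and parameters illustrative): the salt is stored in the clear next to the hash, and its job is only to make equal passwords produce different stored values.

```python
import hashlib
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per stored password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Two users with the same password get different stored hashes, so
# duplicate passwords are not visible in the database and no
# precomputed lookup table covers both:
s1, h1 = hash_password("correct horse")
s2, h2 = hash_password("correct horse")
assert h1 != h2
```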
1. If the client is written by the same people as the server, then it provides no improvement in password confidentiality.
2. It provides no additional security in the authentication process.
3. It provides no additional security-at-rest for post-breach protection.
Doing something like that ends up in the dangerous bucket of duct-tape crypto. I don't mind people playing with crypto, but things are not as simple as they seem, and "more" is not necessarily better.
The client in this case should be the web browser; it shouldn't be custom JavaScript. There should be an attribute on the password field that says whether the password should be hashed before being sent to the server. It could also be salted by the browser to reduce reuse across sites, similar to what any password manager does today.
> Hashing in the client leads to a fair share of security issues, especially if it's not also hashed on the backend using the usual salted hashes.
Sure, if you're foolish enough to not hash on the server side then you're setting yourself up for failure. But I fail to see what the problem with hashing on the client side is?
It doesn’t hurt much, but it doesn’t help much either, especially for web applications. If the user cannot trust that the server doesn’t do bad things with the password, they also cannot trust the javascript hash function sent from the same server, so it does not protect against the scenario discussed here. And it is no substitute for an encrypted connection.
Exactly. Using a password manager is probably the most secure thing that can be done these days given your password for the password manager is secure (which it most likely will be due to the stringent complexity requirements for some of these services)
I think you meant "don't get"