Wow, it's quite disheartening to read some of the comments here. Let's try something, shall we:
- open up private browsing
- press F12 (or however you get the developer console on a mac) and go to the networking tab
- go to gmail.com say
- enter your gmail credentials
- look at the POST request generated; in the request tab, it will contain your password in plain text
So passwords don't get hashed in transit. This is why HTTPS is so crucial: it prevents someone in the middle (say, when you connect to an open Starbucks wifi) from sniffing out your unencrypted password. On the server side, the password initially arrives unencrypted before it gets hashed for storage in the database. So in this instance, the password in the database is hashed, but there is a small window where the password sits in plain text in memory.
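You can see the same thing without a browser: the body of a typical login POST is just URL-encoded form fields, password included. A minimal sketch in Python (the field names here are hypothetical, not Gmail's actual ones):

```python
import urllib.parse

# What a typical login form actually puts on the wire (inside the TLS
# tunnel): plain URL-encoded fields, password in the clear.
body = urllib.parse.urlencode({
    "email": "user@example.com",
    "password": "hunter2",
})
print(body)  # email=user%40example.com&password=hunter2
```

TLS encrypts this in transit, but whoever terminates the connection (the server, or a MITM proxy with an installed root cert) sees it exactly like this.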
For a site called hacker news, it's really sad how little people here know about hacking.
I had a junior QA discover this while working at a Fortune 50... everything ground to a halt for two days until everyone (team of 20) was assured during a handful of meetings that this is how browsers work and why we use HTTPS.
I appreciate they were trying to protect passwords & promote security, but it definitely caught me off guard that this wasn't widely understood.
Hashing in the client leads to a fair share of security issues, especially if it's not also hashed on the backend using the usual salted hashes.
There are protocols like SRP to do it securely, but it's non-trivial. And remember that such a protocol is only useful if you trust the client implementation; it's kind of pointless for a third-party webpage or app.
Use a randomized per-site password. Solves everything.
Hashing twice, on the client as well as on the server, won't introduce any security issues if the choice of algorithm is good, and it does further protect the user's password from being harvested from passively MITM'd SSL, as happens on some corporate networks. (Yes, I know MITM means they could rewrite the client to steal passwords, but that's work.) If I were designing a password auth scheme that highly prioritized user security, I would hash both client and server side.
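A minimal sketch of that both-sides approach, assuming SHA-256 on the client (bound to the site name so the wire value is useless elsewhere) and PBKDF2 on the server; all names here are illustrative:

```python
import hashlib
import hmac
import os

def client_hash(password: str, site: str) -> str:
    # Runs in the client: this digest is what crosses the wire, not the
    # raw password. Mixing in the site name keeps the value site-specific.
    return hashlib.sha256(f"{site}:{password}".encode()).hexdigest()

def server_store(wire_value: str) -> tuple:
    # Runs on the server: the received digest is treated as the password
    # and salted + stretched like any other.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", wire_value.encode(), salt, 100_000)
    return salt, stored

def server_verify(wire_value: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", wire_value.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)
```

The caveat raised elsewhere in the thread still applies: the client-side digest is what authenticates, so intercepting it is as good as intercepting the password for this site.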
>and it does further protect the users password from being harvested from passive MITM'd SSL like it is on some corporate networks.
It might protect the password if the user is reusing it elsewhere, but it doesn't protect the account the password is securing during the intercepted transmission.
You can't enforce complexity rules, check against known leaked passwords, verify that users don't reuse their username as their password, or anything like that if you hash on the client too.
Primarily to get around arbitrary password rules that do not enhance the security of the password but serve to weaken it, e.g. only use special characters from this list: !@#, or sorry your password is TOO LONG (?!)
That's true, but only to a point. You can actually server-side check username/password equality, and a not overly long list of other unwanted passwords. You just have to check each one.
If you hash in the client, attackers with dumped hashed credentials (from elsewhere, which might only hash plaintext) don't need to try to reverse/"crack" the hashes, they can just use them as-is. With users reusing passwords across sites, that would be pretty terrible.
That is why he said we should do both: hash client AND server side. Hashing on the client prevents sending plain-text passwords to a server that could mishandle them, e.g. by logging them, as Twitter once did. Client side, you can also check on registration whether the user's password is known to have been pwned (1Password offers an API for that) and use a secure hashing algorithm. Server side, the hash (which is then the password) will be salted and hashed again.
That makes the hash of the user's password equivalent to a plain text password. And that means all those leaked databases, which previously only contained hashes that needed to be bruteforced to be reversed, suddenly function as a source of plaintexts.
I think (s)he is saying to hash client side, salt, then hash again server side. So if the database gets compromised, the attacker still has to apply the salt to brute force the hash.
I can't see a reason why this hurts, but it's probably not worth it. The only benefit I can see it having is if the user reuses their passwords on other sites, as now a potential malicious MITM would have to brute force the client side hashed password in order to reuse it on other sites. They would still be able to use that client side hashed PW to authenticate to the service in question, though.
Yeah, that's exactly my point. So a MITM who got the PW after the client-side hash, but before the server-side salt and hash, would have to brute force that hash in order to use it on those other sites. That's the only benefit I see, and under a very specific set of circumstances.
The first sentence is true (and I can't believe how far down this thread it first appeared), but nobody was advocating abolishing server side hashing as well.
Basically, hashing on the client does absolutely nothing for the security of the transfer or storage of the password. Whatever the client passes to the server, whether it's hashed or not, is effectively their password, and intercepting that (again, whether hashed or not) would give an attacker the actual credentials of the user.
Oh, I see. Other websites give you the 'password' to this website.
But only if neither site uses salt, so salt all your hashes. (Not even a salt is necessary here, just a site-wide addition would be fine. Or honestly you could just use a hash that wouldn't be used by a site too dumb to salt their database.)
No - As mentioned in a comment elsewhere, these kinds of experiments fall dangerously into a category of duct-tape crypto.
If you want secure password transfer, do not construct a ghetto solution with stacked hashes, but use a proper protocol like SRP (which, again, is pointless unless the client is trusted).
Especially in the case of Twitter, doing this provides absolutely nothing with respect to credential confidentiality, as both sides of the transaction are written by Twitter.
I don't get it. Assuming that a passively listening attacker has broken TLS, additionally hashing at the client is helpful. The service at hand is now (most likely) insecure, but at least other services do not suffer from the plaintext password being exposed. It's an advantage, albeit small, but it's easy to do with virtually no risk of error.
Stacking hashes is not cryptographically sound. The additional hashing weakens the credentials. Cryptographic hashing is a very fragile subject.
"Assuming an attacker has broken TLS" - this makes no sense. If I had broken TLS, I could send you a login form with no hashing. I could modify your legitimate API requests, without the need for your credentials. I could take your token and forge requests as I wanted. All your assumptions go down the drain, as the security relied on the presence of TLS.
Basically, making this setup is generally harmful, while only providing negligible benefit in a doomsday scenario where all bets are off regardless.
If you truly want to have better security, you need an entirely different system. E.g., local private keys used to sign requests, or a proper password exchange/validation protocol. Smacking another hash on top does nothing good, and the idea is a good example of why normal people shouldn't crypto.
I've heard the "additional hashing weakens the credentials" line since I started programming, but no one ever bothers to link a citation; and anyone reasoning simply from e.g. the pigeonhole principle should realize the increased likelihood of collisions is negligible for typical hash choices. It also flies in the face of common in-practice schemes (pre-bcrypt) like doing n rounds of sha256. There should be no real problem with doing the first round on the client.
I agree with you on the general pointlessness of client hashing, though, and oh what a world if we had pub-priv keys for authentication instead of passwords...
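For what it's worth, splitting an iterated-hash scheme across client and server composes cleanly: the first round on the client plus n-1 rounds on the server equals n rounds done in one place. A toy sketch (not an endorsement of raw SHA-256 for password storage):

```python
import hashlib

def sha256_rounds(data: bytes, n: int) -> bytes:
    # Apply SHA-256 n times in sequence.
    for _ in range(n):
        data = hashlib.sha256(data).digest()
    return data

password = b"hunter2"
first_round_on_client = hashlib.sha256(password).digest()
# 1 client round + 999 server rounds == 1000 rounds in one place
assert sha256_rounds(first_round_on_client, 999) == sha256_rounds(password, 1000)
```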
> I've heard the "additional hashing weakens the credentials" line since I started programming, but no one ever bothers to link a citation
The main issue is that security analysis is non-trivial due to the interaction of their security guarantees. The end-result is effectively a new hashing algorithm. Figuring out the properties of this new algorithm requires a new cryptoanalysis (which is more than just avalanche tests).
> It also flies in the face of common in-practice schemes (pre-bcrypt) like doing n rounds of sha256. There should be no real problem with doing the first round on the client.
Old practices are deprecated for good reason, primarily due to flaws. Looking at what we used to do is not that useful.
The main problem is that it's not just "n rounds". The hash on the client must be salted, it must be salted uniquely for that credential set, and it must use a different salt from the backend hash. Plenty of ways to mess this up, and that's before we even consider the implementation details of salting.
Then comes the interactions between likely different hashes. All in all, it's pretty complicated.
> I agree with you on the general pointlessness of client hashing, though, and oh what a world if we had pub-priv keys for authentication instead of passwords...
Especially with the only sensible procedure being password managers, it's silly we don't have asymmetric authentication. :(
I'd like to highlight that "passively listening" was written intentionally. As soon as an attacker becomes active, he's at an incomparably higher risk of being detected.
If TLS was broken, I would always be able to get your password by just removing the hashing from the login page. I'd also be able to modify all legitimate requests, and be able to obtain your genuine login tokens for any site.
That requires an active attack though. Yes, that’s always possible if TLS is broken, but it takes more work and is detectable. (Not saying it’s easy to detect, but it’s detectable in theory, where passive sniffing is not.)
If TLS is broken, you lose all message authentication, rendering tampering entirely undetectable.
Passive sniffing requires the same attack vector, and the idea of an attacker only utilizing passive attacks makes no sense.
The point is that additional hashing designed for the premise of "broken TLS" fails the threat model test: it is pointless, as the only time it applies is when everything has fallen apart anyway.
If your machine has TLS being intercepted nothing matters anymore. That would be the equivalent of having a keylogger installed. There's no point in trying to protect such a client anymore, it's game over then.
Passive interception happens. Especially when you're using a cipher that isn't forward-secure; a breach of the server could allow it to decrypt previous sessions.
I think the idea is to hash on the client _and_ the server. The client's 1-way hash (~~unlike the server's 2-way~~) is the new "password" now, and its sole purpose is so that you don't get to see the raw password on the server.
indeed, scratch that. i rewrote that comment a couple of times, and that 2-way part is of course not correct. this would defeat the point of hashing passwords if they were reversible
notwithstanding all the other caveats mentioned here, wouldn't hashing on the client side make it possible to salt the hash so that different sites generate a different hash, thus making it unlikely that the hash can be reused even if the actual password is the same? the salt could even include a time component making the hash expire after a time.
this obviously does not eliminate the need for other security measures, so it's possibly more a question of "is it worth it?"
Wouldn't hashing on the client side possibly introduce a reduction in the complexity of the password? The hashed password on the client side could be used in place of the plaintext password if the channel is insecure. Let's say the password length limit is 256 ASCII chars; this is 8*256 bits (ok, a little less, since not all ASCII chars are printable), but if hashed on the client (let's be generous and hash to 1024 bits) it's still half the complexity of the plaintext.
A 128 bit random plaintext is more than secure enough...
If you're trying to argue that the user puts in, say, 60 bits of entropy, and that the hashing algorithm is going to accidentally throw out 10 of those to result in 50 bits of entropy, I believe that any hashing algorithm that did that in a way that anyone can exploit with realistic computing power (a computer smaller than the size of the solar system) would be considered irredeemably broken.
If you are not including humans, your maths might be right. With humans, not all 8 bits of extended ASCII are realistically used in passwords. And you rarely get passwords longer than 10 characters.
And then: by the password not leaving the system, you avoid another issue with humans: if the hash is lost (accidental logging and logs leaking and such things happen), this won't allow signing in to other accounts where the lazy human used the same password.
For machine-to-machine communication tokens with higher entropy can and should be used.
It's less about bits and more about entropy. Passwords tend to be low on entropy because of the words and patterns present in the sequence. It's almost always going to be harder to break passwords by trying every possible hash than by doing dictionary attacks against plaintext passwords. 256 bits is around 43 characters of base64 text. Compare that to most passwords and it's safe to say they aren't losing entropy when hashed.
Isn't the main security feature of hashing that you hash against a salt and that no one but the server knows the salt? Once you send the salt to the client (and anything on the client should be considered insecure) you give the ability to generate lookup tables for common passwords. Without the salt it's much harder to brute-force the password.
No. Salts are not intended to be secrets. The expectation is that in the case of a breach that salts are also exposed. What they do is prevent precomputation of lookup tables, granting the developer a bit of time after a breach before all bets are off.
No, the salt can be public (it was on Unix machines before the invention of /etc/shadow). The important thing is that it is unique per password, so that Hash(Salt#Password) is unique even if two passwords happen to be the same.
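A quick sketch of the per-password-salt point: with a fresh random salt, two identical passwords produce different stored hashes (SHA-256 used for brevity; a real system would use a slow KDF):

```python
import hashlib
import os

def store(password: str) -> tuple:
    # Salt is unique per password and fine to store in the clear.
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + b"#" + password.encode()).hexdigest()
    return salt, digest

# Two users picking the same password still get different records.
salt_a, hash_a = store("123456")
salt_b, hash_b = store("123456")
```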
1. If the client is written by the same people as the server, then it provides no improvement in password confidentiality.
2. It provides no additional security in the authentication process.
3. It provides no additional security-at-rest for post-breach protection.
Doing something like that ends up in the dangerous bucket of duct-tape crypto. I don't mind people playing with crypto, but things are not as simple as they seem, and "more" is not necessarily better.
The client in this case should be the web-browser; it shouldn't be custom Javascript. There should be an attribute on the password field that says if the password should be hashed before sent to the server. It could also be salted by the browser to reduce reuse across sites similar to any password manager today.
> Hashing in the client leads to a fair share of security issues, especially if it's not also hashed on the backend using the usual salted hashes.
Sure, if you're foolish enough to not hash on the server side then you're setting yourself up for failure. But I fail to see what the problem with hashing on the client side is?
It doesn’t hurt much, but it doesn’t help much either, especially for web applications. If the user cannot trust that the server doesn’t do bad things with the password, they also cannot trust the javascript hash function sent from the same server, so it does not protect against the scenario discussed here. And it is no substitute for an encrypted connection.
Exactly. Using a password manager is probably the most secure thing that can be done these days given your password for the password manager is secure (which it most likely will be due to the stringent complexity requirements for some of these services)
Of course it's OK that the authentication system can read the password! What is so wrong on both technical and ethical levels is that the password was fed into some data pipeline for machine learning. God knows what else they do with the cleartext passwords.
Hashing the password on the client limits the password entropy to the entropy of the hash you generate. Any additional entropy in the original password is thrown away.
Also, naive hashing in the client just turns the hash into your password; all of the standard issues of transmitting passwords still exist (such as replay attacks, as you mention).
Hashes do reduce it, by an amount determined by the hash size and algorithm.
This has nothing to do with client vs server. SHA1 of STRING will always have the same entropy wherever it is computed; it must, for the hashing to work.
EDIT: I suspect you are confusing PRNGs or KDFs with hashing. Entropy is relevant with the former, not the latter.
> To be useful the hash should turn your password into a OTP
I'm not clear why you think this would be so. Hashes (with salts) prevent attackers from being able to derive the password, and can also be used in conjunction with nonces to prevent replay attacks.
You need both parties to know the nonce and the salt for this to work, so you’ll need to send it over. At this point what value are you gaining over just sending the password?
>You need both parties to know the nonce and the salt for this to work, so you’ll need to send it over.
I get the impression you don't understand what the nonce and salt are, so I'll break it down.
The salt is used to make the hashes for particular values unique to this site. It generally does not change across the site, but it makes it impractical to use pre-computed hash tables to find the original values for a given hash. By using a salt, the site ensures that transmitted hashes cannot be trivially reversed.
The nonce is used to allow the use of an HMAC to create a one-time hash that cannot be reused. This would typically be implemented as follows:
1. Service sends NONCE, SALT
2. Client Sends password as SHA256(SHA256(ClientPasswd + SALT) + NONCE)
3. Service Computes SHA256(StoredSaltedHash + NONCE), confirms that it matches what the user sent
4. If the two match, the user is authed.
An attacker cannot reuse what the client sent because the nonce is not reused, and he cannot fake the nonced hash because he does not know either the password or the StoredSaltedHash.
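The four steps above can be sketched directly (plain SHA-256 over concatenation stands in for a proper HMAC construction; every value here is illustrative):

```python
import hashlib
import hmac
import os

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

SALT = b"example-site-salt"  # hypothetical site-wide salt from step 1

# Registration: the service stores only the salted hash, never the password.
stored_salted_hash = sha256(b"hunter2" + SALT)

# Step 1: service issues a fresh nonce for this login attempt.
nonce = os.urandom(16)

# Step 2: client computes SHA256(SHA256(password + SALT) + NONCE).
client_response = sha256(sha256(b"hunter2" + SALT) + nonce)

# Step 3: service recomputes the same value from what it stored.
expected = sha256(stored_salted_hash + nonce)

# Step 4: constant-time comparison decides whether the user is authed.
authenticated = hmac.compare_digest(client_response, expected)
```

As the replies below point out, the catch is that StoredSaltedHash alone is enough to answer any future challenge, so a dumped database still yields working credentials for this site.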
Firstly, the salt should be changed per user, one of the reasons for salting is that given a large enough user base, a lot of users will choose the same password, and that can make the hash way less effective in obscuring the passwords if/when leaked.
There's essentially rainbow tables computed for "123456"+RandomSalt iterating over salts. There are a bunch of them covering the most frequent passwords. Since you are using the same salt for the whole site, two users with the same password have the same StoredSaltedHash, so you order by number of repetitions, and the most frequent is likely one of those ("123456" tended to be the most popular the last time I checked). This exposes your salt and then it's game over.
What I'm trying to get across (and failing apparently) is that this is still vulnerable to lots of attacks, which means that you'll have to use HTTPS underneath all of this. Once you are doing all of this over a HTTPS connection, there's simply no point in doing it, you are just increasing the attack surface for no substantial gain in security.
>Firstly, the salt should be changed per user, one of the reasons for salting is that given a large enough user base, a lot of users will choose the same password,
This is only an issue if the hash can be reversed. The entire raison d'etre for salts is to prevent hash reversal via precomputed tables.
>There's essentially rainbow tables computed for "123456"+RandomSalt iterating over salts
Rainbow tables are already enormous-- in the hundreds of GB range for 9+ character passwords. Iterating over the range of possible salts makes the storage requirements untenable.
As long as you choose an unlikely or unique salt which could simply be your site's FQDN or its MD5 hash, this isn't an issue. No one is going to have a rainbow table for md5(yourFQDN.com) as salt.
>which means that you'll have to use HTTPS underneath all of this
You should but it is not necessary for secure password transmission. This is a solved problem for decades now.
>Once you are doing all of this over a HTTPS connection, there's simply no point in doing it, you are just increasing the attack surface for no substantial gain in security.
Layered security is a thing. Root CAs can be compromised, trusted CA stores can be tampered with, SSL busters like CRIME or BEAST can show up on the scene, someone can gain network access behind the SSL termination.... Layered security is why when LastPass got popped nothing of value was lost, because they used legit KDFs and salts.
There's no increase in attack surface for using an HMAC behind SSL. If SSL is your only line of defense, your security model is stuck in the 2000s.
Yeah, but then the server must know the plaintext password to check the client reply, which is exactly the problem password hashing schemes were designed to solve.
That issue is solved by simply having "hash(password+salt)" the thing the server has stored and is checking.
In other words you can combine salts + hashes + nonces to both eliminate the server's need to know the password and the possibility of a replay attack.
Correct. The server should be storing the password hashed as Hash(Password+Salt) to make it impractical for an attacker to recover the user's password.
Recovering whatever the server has stored is typically going to be sufficient to authenticate the user, but storing it as a hash mitigates some threats.
Yup. I vaguely recall having to include something like this for a site I built years ago. I didn't code the filter myself, but I remember the feature request that usernames and passwords be filtered based on a profanity list. Users weren't blocked; they would just get a "you can't do that" response. The passwords were inspected in memory as part of the request process, after HTTPS decryption and before being run through the digest function and sent to the DB.
As an alternative to this, check out TozID. The premise of their authentication model is to avoid sending the password altogether and to use public-key crypto to sign and verify requests between the client and the auth server.
Note that such a system only provides a benefit if the client implementation is to be trusted.
E.g., if your user-agent does it all for you, you could consider it trusted, but if all the code is provided by the "untrusted" service-provider that you won't want to see your password, it ends up just being for show.
Similar situation with ProtonMail: As long as you use the clients shipped by them (webmail, app), all of the security hinges on nothing but a promise. Their app can read the passwords and keys as much as it wants.
I also found out that protonmail doesn't seem to send the password in plain text in the post request itself. Really keen to see how they do it as well.
Interesting, but did you really have to be condescending? On top of this, not everyone here is a hacker and not every hacker is a computer security expert.
The Axios journalist who did this, Bethany Allen-Ebrahimian, is a huge thorn in the CCP's side - one of the most outspoken, widely read, and retweeted media critics of China's domestic policies and international activities.
Allen-Ebrahimian has focused on the crackdown against peaceful pro-democracy protestors in Hong Kong and the dismantling of "one country, two systems" (1), genomic surveillance of ethnic minorities in Xinjiang (2), and Huawei (3), among other topics. Last week, a CCP mouthpiece publication labelled her an "anti-China journalist" for her work (4).
She also uses WeChat for research (5).
I believe her WeChat account was very closely monitored, more than the average Western user.
> I believe her WeChat account was very closely monitored, more than the average Western user.
Be that as it may, it doesn't explain why WeChat appears to be storing or transmitting plaintext passwords. That's incredibly alarming, if true. As are the implications of such a design.
They can be transmitted encrypted over https and still readable by the server. It doesn’t make a ton of difference if you encrypt client side, whatever the client sends is “the password”, encrypted or not.
Lots of people in this thread are trying to come up with theoretical ways this could be relatively benign.
Look at it from the state's perspective: You have nearly unlimited control over your population, but you still can't read their minds. Given how common password reuse is, why wouldn't you track everyone's passwords in case you want access to data stored on systems that aren't already integrated into your sweeping surveillance system?
In the best of systems, client-side encryption/hashing occurs.
In the mediocre, they are sent to a server-side app and hashed without being analyzed, and stored in their hashed/salted state.
No good service analyzes your password server side, much less for the offensive nature of your password.
edit: Okay, I suppose there can be some analysis of password strength server-side before hashing at rest. Still, it should not be analyzed for the social acceptance of its content.
I've used systems in the past that analyze them server side for similitude with previous passwords of your own (or perhaps only your last password? if that's the case, requiring current password would be enough, no need to store it in plain text).
They might also want to check it against a list of most used passwords.
I vaguely remember Microsoft doing this, e.g. "You cannot use your old password and just add a number to it", but I might be mistaken and it may have been only blocking setting the password to a previous one.
All they'd have to do to safely achieve that is, when you initially set your password to "foo", store 11 hashes (foo, foo0, foo1, foo2...). Then when you change your password, its hash cannot equal any of the previous hashes.
Or, even simpler: if the plaintext of the new password ends in a number, try stripping (or decrementing!) that number and see if either of those hashes to the same value as the stored old password.
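A sketch of that check, assuming only the old password's hash is stored (helper names are made up):

```python
import hashlib

def h(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

def too_similar(new_pw: str, old_hash: str) -> bool:
    # Try the new password itself, plus variants with a trailing digit
    # stripped or decremented, against the stored hash of the old password.
    candidates = [new_pw]
    if new_pw and new_pw[-1].isdigit():
        candidates.append(new_pw[:-1])                             # "foo3" -> "foo"
        candidates.append(new_pw[:-1] + str(int(new_pw[-1]) - 1))  # "foo3" -> "foo2"
    return any(h(c) == old_hash for c in candidates)
```

This only works because the user must supply the new password in cleartext at change time anyway, so no plaintext storage is needed.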
- The actual password being known to an attacker who can read everything but the client-side state.
- The password has a high chance of being used at other sites as well, so preventing attackers from knowing the password on this site, also prevents them from using it to attack other sites.
But:
- It does not protect against an attacker obtaining an authorization token, since the client-side hashed password is itself the token the server sees.
- It does not protect against an attacker who can modify the code which the client receives and runs.
So instead of the attackers/admins seeing your password, they see a string of characters that, when delivered to the server along with your account id, grants access to your account. Hm.
If the password 'Superman' is stored as text in this configuration file I need to fix for a user, there's a good chance I inadvertently learn their password is "Superman", and I may remember this for hours or days even though I did not set out to learn it. Maybe I half-forget and think it was Spiderman or something; that wouldn't happen if it was 16 alphanumerics chosen at random, but it isn't.
Whereas suppose the file obfuscates that password with Base64 and stores U3VwZXJtYW4= my brain doesn't see that and realise "Huh the user's password is Superman" or even "The user's password is U3VwZXJtYW4=" it just goes "Some gibberish not important to the current problem" and an hour later I couldn't tell you what it was.
There are a bunch of things we do not because they stop malevolent people from doing evil, but because they avoid tempting good people to be naughty.
The best systems aren't vulnerable to that, since it's a 2-way handshake using a Password Authenticated Key Exchange (PAKE)[1] like SRP (Secure Remote Password) or OPAQUE[2].
Doesn't mitigate the fact that this type of scheme turns the hashed password into the equivalent of THE cleartext password. It's usually compromised/dumped server databases you want to protect against.
No, the outlined scheme is actually correct. If there’s a call/response aspect, then the hashed nonce+password is not “the cleartext password”, it’s “the cleartext password for that particular challenge”, which makes it useless for other auth attempts, which would have different challenges.
The reason passwords are hashed is to prevent them from being usable to people who gain access to the backend database.
If the hash in the backend is the only thing you need to calculate the salt challenge, then it is the equivalent of a plaintext password. Someone with access to dumps of a compromised backend database can use the hash directly.
I think we’re making different assumptions about how the challenge/response would be implemented. I agree that if there’s a shared secret and the nonce is simply appended to it and hashed, it’s a poor design.
I believe they are doing something more sinister than storing a plaintext password. Why would they even check the contents of it?
That the ban came with a short delay makes me guess that they scan all input to their app, indiscriminately. They just send all keystrokes to their censorship server, check them there, and apply penalties if necessary.
The reason they would do this is because WeChat is actually many apps in one. Third parties can embed mini apps as HTML pages in their profiles. For example, when in China I used a WeChat applet to control a coffee vending machine. In order to monitor content in every app, it makes sense to just blanket scan the inputs. Someone brave could test this by typing offensive things into some text field but not submitting them.
Pretty unlikely. That would be the weirdest shit ever. This reeks of a plain-text password: it probably runs through HTTPS, but is then either logged in plain text while the auth code runs or, and this is even more likely, stored in plain text. That's 10000 times more likely than checking a blacklist or running a banned-password regex list. There likely is a regex for password constraints, but blocking specific words is extremely unlikely because it would probably compromise the security of the regex itself.
edit: I'll take back most of my comment here because it was admittedly hard to read and slightly out of context.
I'll un-shorthand my comment since it wasn't understood too well, sorry about that.
If it was password-strength validation, then it would have been blocked immediately during signup, with an error message that stopped the sign-up flow altogether. The alternative flow that I'm calling the "weirdest shit ever" would be letting the user sign up without a password error, which would mean they are logging the password in plain text for manual review later. You're not only failing to enforce the password rules during registration, you're also logging the password so someone can see it in plain text. That's why it's the "weirdest shit ever", especially if this code was written by a security person.
If I suddenly want to add words like "fuck" or "f*ck" as banned words to the regex, the complexity of that already complex regex goes waaaay up, and it compromises its validity/stability. If you've worked on regexes with multiple alternative flows, you know it's very easy to fuck one up and make it accept anything. It's not a good idea to add stop words to a regex, because doing so has a high chance of breaking the regex itself.
Again however, it's highly unlikely the password was blocked in the password regex step because it allowed signup to continue.
> And you're saying that they'd prefer to store the passwords as plain-text out of concern for the "security of the regex"?
No, I wasn't saying that they prefer to store the passwords in plain text. I was saying that it's 10000 times more likely that the password is stored in plain text IFF (if and only if) the reason for the ban was the password. How would they ban the user if they couldn't see the password, either in a log file or simply stored in plain text in the database? And the reason plain-text storage is likely is that it's probably the number-one first approach to storing a username/password combination. Even though WeChat is a huge company, it honestly wouldn't surprise me if they are not using a salt-and-hash scheme or bcrypt at this point. Yes, I get that most people you know would use bcrypt or a salt/hash combo, but when you open up to the world of all developers, specifically ones in China, what do you think the default user table is going to look like?
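For reference, the salted-hash storage the parent has in mind looks roughly like this. A minimal sketch, using the standard library's scrypt as a stand-in for bcrypt:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # a fresh random salt per user; only (salt, digest) goes in the database
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # re-derive and compare in constant time; the plaintext is never stored
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

A plain-text user table skips all of this and stores the password column verbatim, which is exactly what would make a manual "offensive password" review possible after the fact.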
Plenty of very secure identity providers allow for a list of banned words. Typically you populate the list with things like the name of the app/website/business, the user's name, etc. (rather than vulgarity/dissent as seen here). There is nothing insecure about doing a dictionary comparison right before hashing for storage or regexing for strength constraints.
And these banned words somehow do not stop registration immediately? And these VERY secure identity providers then later show the admin what password was attempted? I would say that's extremely insecure if not a huge privacy violation for something to be considered "very secure", and would definitely not have any form of sign-off from anyone that keeps up to date with OWASP security practices.
Does it need to be plaintext though? Just a theoretical possibility: They can maintain hashes of potentially offensive passwords. You then need to just compare hashes. This is somewhat like registering all domains which can harm your reputation yourself, before anyone else does.
I am not suggesting that this is what WeChat does, but I think it is possible to have a blacklist without plaintext transmission.
> They can maintain hashes of potentially offensive passwords
Assuming they want to check against all casing combinations, the list of hashes would be over a hundred times bigger for a list of 7-letter words (2^7 = 128 casings each). If they want to check for substrings or spelling variations, the list would combinatorially explode.
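The casing blow-up alone is easy to quantify; a quick sketch:

```python
from itertools import product

def case_variants(word: str) -> set[str]:
    # every upper/lower casing of the word: up to 2**len(word) variants
    # (fewer if some characters, e.g. digits, have no case)
    options = [{c.lower(), c.upper()} for c in word]
    return {"".join(combo) for combo in product(*options)}

# one 7-letter word already needs 128 pre-hashed entries, before any
# substring matching or leetspeak substitutions are even considered
variants = case_variants("fuckccp")
```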
When WalMart Stores, Inc. was opening many stores in China in the late 90s and early 2000s, I was on the 'Network Management' team. Think: 'devops', but for an enormous global network.
At the time, (most) every store in the world had a 56k frame relay network connection back to the Bentonville, Arkansas home office. The main purpose of this connection was to do various credit/debit/EBT/check/etc. authorizations.
Stores in China had something additional: a fractional 56k frame link, the far end terminated by some other entity.
Normally, in-store point-of-sale systems sent authorizations to the then-named VISA system in Bentonville. (It was called VISA but it handled most electronic transaction types. It was replaced by a far more robust and generalized system called E-Pay shortly thereafter.)
In China, the POS systems also sent the transactions across that other link.
We didn't know officially who was on the other side, but it was widely speculated that it was the Chinese government.
My knowledge of these things is nearly 20 years old now, do take my recollections with a grain of salt. Also, I have no idea how this setup has subsequently evolved.
Others in the Twitter thread have claimed to use the same password without any effect. Would like to see some replication before jumping to too many conclusions.
It looks like this has been blowing up on Twitter for 20 hours. WeChat is perfectly capable of having turned off this feature by now and letting people think she was wrong. Or even of having people on their payroll pretend it never worked like this, to make sure people only test after they've changed the rules.
A few k stars on Twitter is hardly blowing up. More plausible explanation: journalist was using WeChat in “creative” ways looking for a story (even admitted to “probing”, though didn’t mention anything else she probed), got account closed after a while, attributed it to the last action, although that was not actually the cause (could be a combination of various activities).
I'm sure her WeChat contacts are now in for some close scrutiny as well. I hope they were aware what she was doing, and none of them have anything to hide from the Government.
Or her account was flagged for any number of reasons when a user changes a password on any service.
I've seen worse user experiences.
But yeah, let's just go with "the passwords are in plaintext and she's a journalist Beijing doesn't like" as the only possible explanation. That's the explanation I'm going to go with when Facebook security reviews blindside my new accounts and close them unilaterally.
>This only proves they aren't encrypting passwords on the fly. And have, and do, the ability to read your password.
Not really. Even if you have password hashing, the password is always in the clear on the server when you're attempting a login or setting it. They can simply run the detection system then, without loss of security.
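Concretely, such a check could sit right before the hashing step, at the only moment the server legitimately has the plaintext. A sketch; the blacklist term is a hypothetical placeholder:

```python
import hashlib
import os

BANNED_SUBSTRINGS = {"fuckccp"}  # hypothetical blacklist, checked in-band

def set_password(password: str) -> tuple[bytes, bytes]:
    # the server already has the plaintext here (post-TLS), so this policy
    # check sees nothing beyond what any server-side validation sees
    if any(term in password.lower() for term in BANNED_SUBSTRINGS):
        raise ValueError("password not allowed")
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # only the salted hash is persisted
```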
> The SRP JavaScript demo has been tested successfully under the following browser environments:
>
> Mozilla 1.x, Netscape 6, 7
> Netscape 4.x - JDK 1.1.5 w/bignum support required
> Internet Explorer WindowsNT 4.0 (IE4)
I have a feeling that the page referenced is not so "modern" :-)
BUT, that doesn't make it a necessarily bad protocol for modern browsers to be implementing, i.e. as part of their "identity management" stuff, like Firefox Sync [1].
Remember that the Stanford Java applet implementation was purely academic, intended as a demonstration and developed at a time where JavaScript was too limited for the large number arithmetic they wanted to do (no BigInteger equivalent).
That purely academic implementation does not invalidate the work itself, or future implementations thereof.
SRP is good. Apple uses it for HomeKit. The HomeKit pins you use are authenticated by SRP. The page is just a really old demo, the protocol itself is still sound and some have come up with variations for more security.
Also it has the ability to generate a session key.
The good news is that using SRP is definitely not worse than DH as a key agreement protocol.
The bad news is you probably already have a nice modern ECDH key agreement protocol, you wanted secure passwords, and the proof of how SRP delivers that involves a lot of flailing about. Flailing about which has so far reached SRP version 6a.
If browsers and backend stacks and everything else were one config change away from doing SRP 6a tomorrow, it'd be tempting to say hey, go ahead, it can't hurt.
But in fact SRP is very niche, so it makes at least as much sense to try to deploy OPAQUE or other things that have a clearer security rationale.
It's an educational demo for students to learn about the protocol, ya nincompoop. The actual protocol is language independent and only really requires that the client do a teeny bit of lifting.
On any client where you can run code you can authenticate without ever sending your plaintext password over the wire. And on clients where you can't run code you can still accept the plaintext password as a fallback.
Dude what? I don't think anybody uses that. I've never heard of it before. Just because somebody created a thing doesn't mean it's safe or secure or that we should all switch to it before it gets some solid testing and validation under its belt.
I don't see any plugins or libraries to add to any of the huge number of existing authentication libraries. Has it even been subject to a hostile security audit? Has it been put in a place where it will face a serious state-sponsored attack and performed at least as well as existing systems?
I've written many webapps in my lifetime with browser and server-side validation. Encrypting is the very first step once the form is submitted. We never knew the password; the only thing the library could do was validate it for minimum requirements and compare it to stored hashes, that's it. If your password was "CannibalisticBabyRaper42!", as far as the server was concerned it met all the requirements.
> Encrypting is the very first step once the form is submitted
I've never worked on a webapp that did this, though I have heard of it. What's the point, really, if you're using SSL? If you don't trust the server then don't use the service
Server side, the more you pass around a clear text password the more likely you are to have a security issue. So it’s very good practice to immediately hash any submitted passwords to avoid them ending up in logs accidentally etc.
It’s pointless client side. Trusting the client’s hashing of a password is the same as using a clear-text password: if some admin or hacker learns the hash, they can use a modified client to send the hash directly.
It's only pointless to hash client side if the server does not also hash.
Hashing on the client what the user actually types and sending that hash to the server which then treats that hash as the password can help in the case of users that construct similar passwords for different sites.
A user who uses passwords of the form "<site>.19%Gkm19^GB", for example, would normally be sending "gmail.19%Gkm19^GB" to Gmail, "twitter.19%Gkm19^GB" to Twitter, and so on. If any one of those gets compromised there is a good chance they all will be.
If the sites had a first step of client-side hashing, then the password that Gmail sees is "0d156132c43e7110b5d678eafd7117b5a4649fbf" and the one Twitter sees is "cfd9d7f62b92eb025a242bc860d41aae60ebeacf". One of those getting compromised on the wire or at the server doesn't put the other at risk.
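That derivation is a one-liner per site. A sketch; SHA-1 here just mirrors the hex digests above, a slow password hash would be the sensible real-world choice:

```python
import hashlib

def site_password(site: str, secret: str) -> str:
    # the site only ever receives this derived value, never the reusable secret
    return hashlib.sha1(f"{site}.{secret}".encode()).hexdigest()

gmail_pw = site_password("gmail", "19%Gkm19^GB")
twitter_pw = site_password("twitter", "19%Gkm19^GB")
# the two derived passwords reveal nothing usable about each other
```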
If you accept that someone could compromise the web server but be unable to modify the website sent then sure. More generally if you don’t trust the other end you should not be using passwords.
If you're using SSL the password is encrypted in transit. Once it reaches the server it needs to be encrypted at rest. When I see billion-dollar companies with an ungodly IT budget not do something basic like salt and hash passwords in the database, I have to wonder how they could miss something so basic.
Fully agree that a password should not be manipulated beyond what is absolutely necessary and this is usually just hashing it and getting rid of the cleartext asap.
However, it is difficult to draw any conclusions from a tweet.
The author is a journalist covering China, so she might already have performed various 'tests' on her WeChat account that raised red flags for various reasons, until her account ended up flagged for deletion.
If they are really checking cleartext passwords against 'offensive' keywords they are impressively thorough, tbh, but it would still seem odd to delete an account just like that instead of rejecting the password so I'm thinking that there is more to her story, if it is true.
I’ve been thinking that it could be a great safety feature. If every site applied its own mapping before hashing, it would eliminate simple password matching. As it stands, if attackers get access to the database, they can find passwords by hashing common passwords and matching against it, which is entirely feasible at database speeds. If every site did per-site mapping it could prevent that line of attack. I can’t really see a downside.
> This only proves they aren't encrypting passwords on the fly. And have, and do, the ability to read your password.
Not at all. They could simply be checking against rules before hashing the password. Pretty much any passworded system already does this in order to enforce minimum length rules.
Password length rules are enforced at form submission time, before the account is opened - whereas the twitter post says the account was opened, then permaclosed.
If that's their password length checker, they've got the maddest password length check design I've ever heard of.
Maybe they just have simple hashes (unsalted) of their disallowed passwords and flag hits to them. They compute all the password combos that trash the CCP (and whatever else) and autoban, I guess...
I doubt people were using "Fuck", rather something like "FuckMyEx" or whatever, which has a completely different hash. Also, passwords should be hashed with salt.
In which case they aren't even salting the password, which means that hashing the password isn't doing a whole heap. It isn't plaintext, but it isn't that far from it.
> In the story, they gave him no warning that the password was prohibited, or why, and then permanently deleted the account with no recourse.
/She/ is also a well known journalist who could easily have had her account being monitored. Other people on that thread claimed that they changed their password to the same and nothing happened.
In this case it seems like a form of thought control. Even if no one else knows your unpatriotic password you can't use it because you shouldn't be thinking those thoughts.
I read the top 3 tweets there. She doesn't say how, within 45s, she was informed of the account closure?
If she just couldn't login, for example then it could simply be she mistyped the password.
The narrative of how she just suddenly decided to check if writing FuckCCP89 in the password field would cause any effect seems distinctly unlikely. If she had a tip-off that it would have an effect, then fair enough; but she should note that and add credence to her story.
People have, I gather, but have not been banned - but it could be she's a special case, her contention appears to be her account was being closely monitored -- I think she's pitching the idea that WeChat is so integrated with CCP that they even allow them access to your plaintext password.
For the people who don't know what this means. WeChat is saving the passwords of all its users in plaintext. Which means the company and their employees can see your password. Which means CCP could use this password to gain access to your other accounts
No, you can't assume that; someone in the Reddit thread had a more reasonable explanation:
- password goes through filter check onSubmit and some flag is set on the account immediately, it's added to a queue, pw is hashed and stored
- "account moderation" worker picks up task from its gigantic queue of Chinese accounts that need some automated action taking on them, bans account, notifies user, does whatever else needs to be done when closing an account for a service like WeChat
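That two-stage flow, in-band flagging followed by an asynchronous ban, is simple to sketch. Hypothetical; the filter term and queue shape are placeholders:

```python
from queue import Queue

flag_queue: Queue = Queue()

def on_password_submit(account_id: str, plaintext: str) -> None:
    # in-band: check the plaintext, set only a flag, then hash-and-store;
    # the plaintext itself is never persisted
    if "fuckccp" in plaintext.lower():
        flag_queue.put(account_id)
    # ... hash and store the password here; plaintext is then discarded

def moderation_worker() -> list[str]:
    # asynchronous: drain the queue and action the flagged accounts
    banned = []
    while not flag_queue.empty():
        banned.append(flag_queue.get())
    return banned
```

Under this model the ~45-second delay in the story would just be queue latency, not a human in the loop.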
Edit just to remark: a lot of people commenting on this thread are making some pretty big assumptions about both what apps do do and should do with passwords.
In my experience, you can more or less say this: most companies and applications in 2020 do hash passwords before storing them in the database.
I'd bet it all. The CCP is like God within the borders of China. They have omnipotence and omniscience.
To be clear, I wouldn't bet it all that the passwords are stored in plaintext. But I would bet it all that the CCP has their own special key and/or backdoor access which allows them to continue having omnipotence and omniscience while keeping pesky foreign powers out.
After my initial laugh at your response, I realized 7% is about what I'd bet too.
My first foray into options trading I lost around 3% of my net worth, and I'd say I'm more than twice as confident about this than I was about that.
I'd evaluate the odds of the CCP doing something, to be in line with the odds of them benefiting from doing something, regardless of the expense/risk to their populace.
There's nothing I'd really put past them, we know for a fact they harvest organs from political dissidents, but we're skeptical on if they'd store passwords plaintext?
Given people tend to re-use passwords, I'd imagine having a massive trove of plaintext passwords for all Chinese citizens, or even anyone who communicates with them, would be incredibly useful.
Not to mention the fact that they have to maintain a list of anti-CCP passwords, which would be a tedious process, or they'd have to automate something to detect anti-CCP sentiment. I think an interesting experiment would be to see what less obvious anti-CCP passwords get you banned. With enough probing and data, I'd possibly increase my wager to 10%.
As a well known and outspoken critic of the CCP, she might be elevated to the status where they actually just have a person reading everything she types into WeChat 24/7. Do you think they fully staff the night shift, or would the ban have taken twice as long outside of Chinese business hours?
Well it depends on what you mean. I'd say there's basically a 100% chance they can access your account. In a properly designed system for this, there would be a feature for allowing certain admin users to log in as an arbitrary user and access all of their information as them, without ever seeing or typing their password, so the actual password is kept secure and there are logs of which admin "ghosts" which user accounts when.
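The "ghosting" design described above is usually just a logged session grant; a sketch, with illustrative names:

```python
from datetime import datetime, timezone

AUDIT_LOG: list[tuple[datetime, str, str]] = []

def ghost_login(admin_id: str, target_user: str) -> dict:
    # the admin gets a session *as* the user without ever seeing or typing
    # the user's password; the access itself is what gets recorded
    AUDIT_LOG.append((datetime.now(timezone.utc), admin_id, target_user))
    return {"session_for": target_user, "impersonated_by": admin_id}
```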
Would I bet that the Chinese Government has this properly implemented and can't access passwords once set? Yeah no way.
Well it's wrong, and stupid under my Worldview .. but from the point of a fascist dictatorship I can see how it seems reasonable. Catch people who think they're doing something hidden and who appear to have disdain for the state machine.
Like choosing business associates based on their politics, just at a larger scale.
That the password passes through any filter check other than a refusal for insufficient strength should be a red flag to anyone. That any site would flag passwords for review should be the last flag you ever need to know not to trust the site.
That doesn't have to be the case at all. They could send the password (plaintext, hashed or otherwise) elsewhere to get checked that just takes a little bit of time, and get some form of positive/negative response back. Or any number of similar alternatives. It's still bad, but let's not jump to conclusions.
The OP is Bethany Allen-Ebrahimian, a China reporter that's likely on the CCP shitlist. Most likely there are actions specifically targeting her account that she's conflating with general policy. Also she could... just be embellishing. As someone who follows the space, her reporting is occasionally very questionable. But my money is her account was being monitored there's a trigger to ban if she takes account changing actions. This way CCP can slowly weed out foreign reporters instead of blanket ban.
Is it more likely that they want people to have nice passwords so set up filters to make sure, or that they know everyone’s password because they want to be able to see what all of China is saying to each other? I’ll continue to believe it’s the latter unless I see a better explanation.
It's highly unlikely they need folks' passwords to "see what all of China is saying to each other". I'd fully expect the Chinese government to have full access to that, without any need for a password.
Unless they assume the average Chinese user is just like the average user everywhere else on the planet which tends to reuse password in multiple locations.
I think the fact that she is a western journalist who speaks out against the CCP makes a reasonable explanation that her account is more 'watched' then the average account.
What makes more sense, a platform known for censorship and asshatery censored someone, or a journalist who's income relies on her reputation made up a small largely non-story that won't earn her any money but will ruin her reputation if it's proved to be false?
A caring dev sneaking in a blacklist to reduce the risk of physical harm for users who unexpectedly find themselves in a rubber hose attack?
Surely the least likely of all possible explanations, but an easter egg blocking passwords that are variations of "I refuse to cooperate" would be a hidden artistic statement in its own way.
Would you trust the third party that flagged this as offensive:
F*ckCCP89
Edit: given that her account was permanently deleted after just 45 seconds, I actually think some party member working at WeChat is monitoring her activities in real-time. The password probably get him angry enough to push the permadelete button.
Actually, the timeline indicates to me that it was automatic. Considering that they wouldn't assign someone solely to watch one journalist's account for infrequent changes, I think it's unlikely that any human saw it in the first few seconds after it happened and took it on themselves to take irrevocable action in the next second after that. My feeling is that queueing delays of various sorts took up most of the 45 seconds, but I would love to hear better ideas on that point.
Would a native Chinese speaker even have that visceral emotional reaction to English profanity? I'm curious about how that impact translates.
They don't need to have a visceral emotional reaction, they just need to know it's a strong anti-CCP sentiment. Given how frequently we use Fuck ____ in English for things we don't like (Fuck Cancer, Fuck the police, etc), it's a pretty obvious one.
I'd also assume they'd assign the english speaking North American dissidents to a monitoring person who speaks good english.
Why? China does flag people for monitoring 24/7. Is it hard to believe that in China where the party values stability over everything else that they would not have ID people that they feel post / report unfavorably on the CCP as someone to be tracked / watch by a human at all times? The Chinese state security apparatus is quite good and has near unlimited budget and man power.
This is wrong for multiple reasons: they can check whether the password is offensive before hashing it and they wouldn't need your password to access your account anyway.
I'm surprised this isn't raised more often. Tons of passwords on the web are still 100% plaintext on the other end of the TLS connection.
And then people are surprised about where those ginormous plaintext password leaks come from.
All kinds of popular online forum engines have been hacked for password captures since time immemorial. phpBB, for example, still does all its hashing server-side.
Now, for people concerned, take a look who was the party who sank crypto forms at W3C.
Not saying this isn't important data, but at some point doesn't 2FA make this an ineffective method to spy on your citizens? If I have 2FA on my Amazon account and the CCP tried to get into it, I would just get a notification with a code and do nothing, except maybe change my password. Additionally, there are probably all sorts of account logs saying "this is who logged in when, from what IP address" associated with a lot of these accounts.
Direct access via the companies themselves is probably much more valuable today.
SMS-based 2FA is pretty weak, I think you can reasonably assume that a resourceful government adversary can silently divert SMS codes intended for your phone to their systems.
In the case of China in particular we know that part of the "Great Firewall" have IP addresses associated with Chinese residential ISPs, whether those are "hijacked" or the relevant agency just asks nicely we do not know. So it may be that "Chinese central government intelligence agency" and "My neighbour's WiFi" are similar IP addresses if you live there.
But yes multi-factor authentication can reduce the impact of credential stuffing attacks.
At some point every password is plain text, be it on the client or elsewhere; they could simply check it before it gets hashed and stored, even on the client end if they wanted to.
Anyone remember the green or red password-strength indicator (minimum number of special characters, digits)? That's all done at the client, letting you correct the password before it's accepted.
In the OP case it could be many factors added together that led to the banning.
This underscores how precious and fragile the freedom of speech is.
I'd be surprised if they didn't have a rainbow table of all weak passwords. The addition of offensive password checking and the ability to ban users based on their content is what's novel and alarming in this case.
If they’re salting like they should then rainbow tables aren’t useful. They would just have a plaintext list of weak passwords and do a direct lookup. Rainbow tables are just a compression technique for hashed password lookups which wouldn’t work with salting.
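Both halves of that point are easy to demonstrate: a weak-password check is a plaintext set-membership test at set time, and per-user salts make one precomputed table useless. A sketch:

```python
import hashlib
import os

WEAK_PASSWORDS = {"password", "123456", "qwerty"}  # plaintext denylist

def is_weak(candidate: str) -> bool:
    # direct lookup at set time; no precomputed hash table needed
    return candidate in WEAK_PASSWORDS

def salted_hash(password: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()

# identical passwords hash differently under different salts, so a single
# precomputed (rainbow) table can't cover all users at once
h1 = salted_hash("password", os.urandom(16))
h2 = salted_hash("password", os.urandom(16))
```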
This could just be a coincidence. There are multiple people in the thread claiming they set their password to the same thing and have suffered no consequences.
Don't spoil the party. It is much more fun to pretend that WeChat stores a billion passwords in plain text just so some bureaucrat from the CCP can check if there is anything offensive there.
You don't need to store it in plaintext; this was probably caught by something general like the Great Firewall or Golden Shield and then traced back to her WeChat account.
I know it's very fun to dunk on China these days, but I'd recommend that everyone take a step back for a second...
As has already been stated, multiple people have tried what she did (myself included; I just tried with a spare SIM) and we have not had our accounts banned.
WeChat also has little reason to ban someone over a private password, because a password can hardly be considered a communications risk (it's not like it's being publicly posted for everyone to read). It seems much more likely that her account was closed for reasons outside of this password change.
This twitter user [0] asked a few friends to replicate the process and none of them were banned. People are theorizing that an international WeChat account that hasn't logged in for a while and then immediately changes the password after logging in trips automatic fraud checks as it's quite common for criminals to hijack international accounts (which have looser authentication methods than Chinese accounts).
Are we really surprised, given China’s death grip on the app? They train their anti-censorship algorithms on user conversations outside the great firewall. Even having the thing installed on my phone is too far
Then again, this could be a "brilliant" move to get people to out themselves and worm out dissenters.
Close out accounts randomly and see if someone tries to rationalize it in a disgruntled manner. If you're a Western agitator you'll complain. If you're a proper patriotic member of the CCP you'll understand that it's all for the good of the Party.
Jesus. Did I just write that?
What a world we live in where that isn't unreasonable. Then again I really shouldn't be going and giving places ideas I suppose.
It's Bethany Allen-Ebrahimian, she did work on the China Cable expose on XJ camps. Also one of the louder voices in the growing "anti" China twitter clique. Her work gained a lot of MSM traction in the last few years due to... new geopolitical realities. All this is to say, I'm surprised she wasn't banned from Chinese social media already. I wouldn't be surprised if her account is on some automated watch list with various conditions to trigger bans that doesn't apply to general accounts. Hence:
>Fwiw just changed my wechat password to the same one bethany used just to test this, continue to be able to use it without incident.