I mentioned this on Twitter[1]: the reason to encrypt before sending over SSL is not about double encryption, it's about how a large backend system is designed.
Often, you have load balancers that are SSL endpoints, so the data is decrypted at that point.
You can start to see the problem already. What if there's some unrelated bug, and so a dev starts logging requests somewhere down the line? You accidentally start logging cleartext passwords. Oops. Facebook was fined for this not that long ago.
But if the password is encrypted, then it’s not really an issue, and the black box blob can be forwarded to a login microservice. There, the team decrypting will be on higher alert.
So depending on the structure of various teams, you now have fewer teams that need that kind of security oversight and can move faster.
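To make the shape concrete, here's a minimal sketch of that kind of sealing step (Kotlin for brevity; the function names and how the key gets distributed are illustrative assumptions, not how any particular system does it):

```kotlin
import java.security.KeyFactory
import java.security.spec.X509EncodedKeySpec
import java.util.Base64
import javax.crypto.Cipher

// Hypothetical client-side step: seal the password to the login service's
// public key before the request is sent. The TLS terminator, load balancers,
// and any request logging in between only ever see an opaque base64 blob.
fun sealPassword(password: String, loginServicePubKeyB64: String): String {
    val spec = X509EncodedKeySpec(Base64.getDecoder().decode(loginServicePubKeyB64))
    val publicKey = KeyFactory.getInstance("RSA").generatePublic(spec)
    val cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
    cipher.init(Cipher.ENCRYPT_MODE, publicKey)
    return Base64.getEncoder().encodeToString(cipher.doFinal(password.toByteArray()))
}
```

Only the login microservice holds the matching private key, so everything upstream of it really is handling a black box.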
However, a better approach to this problem is to not rely on shared secrets. Use public-key signature tech to stop worrying about mistakenly logging a secret. If you never had it, you can't lose it.
If you were to log literally every byte of the plaintext traffic when I sign into GitHub (e.g. maybe you're a GitHub ops person), you don't get the ability to sign into GitHub as me. There's a WebAuthn signature step, my signature is authentic, and you can even verify that from your log if you want, but you'd need to make a new signature to sign in, and you can't do that because the key needed to make my signature never left my hands.
Even better, GitHub defuses their liability: as well as a (presumably hashed) password that could be broken by a hypothetical attacker, they've got a public key for me, and learning that public key doesn't help the attacker do anything, at their site or anywhere else. It can't even be used - unlike the SSH public keys GitHub holds - to identify people, since WebAuthn public keys are deliberately uncorrelated; you can't match my GitHub key against a Facebook key, for example.
Yeah, I do hashing client side as well as server side. It's not ideal - what would be ideal is zero knowledge proofs for such a thing. But it's basically:
hash(static-pepper, username, password) * 250k
That + tagging the password with "password++" or something means that you're a lot safer against the major issue of leaking a password before it's stored, for example the mistake that definitely happens everywhere of "let me just add request logging, whoops there's everyone's plaintext passwords". You can always search your logs for the 'password++' tag and alert if you find it, and if that does happen at least you know an attacker isn't going to have an easy time extracting a plaintext password - it buys you time.
And if an attacker gets SQLi or whatever and dumps the passwords they're that much harder to crack - you've added hundreds of thousands of iterations of key stretching, and it's totally distributed to clients so you don't even have to worry about it blowing up your db/auth service CPU.
And it's trivial to implement, which is the really important part. ZKP is a lot more work, but what I described is like 5 extra minutes and pretty trivial.
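As a sketch of just how little code it is (the digest choice, separator, and hex encoding here are my assumptions; the scheme above only gives the shape):

```kotlin
import java.security.MessageDigest

const val STATIC_PEPPER = "example-site-v1"  // hypothetical per-site constant
const val ITERATIONS = 250_000

fun clientSideHash(username: String, password: String): String {
    val md = MessageDigest.getInstance("SHA-256")
    var digest = md.digest("$STATIC_PEPPER|$username|$password".toByteArray())
    repeat(ITERATIONS - 1) { digest = md.digest(digest) }  // key stretching on the client
    // Tag the value so any leaked copy is trivially greppable in logs.
    return "password++" + digest.joinToString("") { "%02x".format(it) }
}
```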
Though this is obviously better from a wire-interception PoV, it means that you can't enforce any password policies, or maintain a list of leaked/bad passwords (e.g. HIBP).
You can enforce password policy client side. A technical user can go way out of their way to bypass it, but honestly, at that point who cares? If you really want to you can send up some metadata or something I guess.
Another option is to send the password at user creation time, but then to rely on a hash at login time. Now there's leak potential, but you only have to audit for leaks in one part of the codebase.
There's a lotta stuff you can do to improve upon the very quick version I'd mentioned.
It's definitely not equivalent. The plaintext isn't (as easily) recoverable, which means that if the user used the same plaintext password for another service it's (somewhat more) protected.
Pass The Hash is also protocol specific - if you try to replay a hash to your average HTTP service it won't go "oh, it's already hashed, thanks" it'll just hash it again and you'll fail to authenticate.
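Sketched out, with a bare SHA-256 standing in for whatever the service actually uses:

```kotlin
import java.security.MessageDigest

fun sha256hex(s: String): String =
    MessageDigest.getInstance("SHA-256").digest(s.toByteArray())
        .joinToString("") { "%02x".format(it) }

// The server hashes whatever string arrives. Replaying the stored hash just
// produces hash(hash(password)), which won't match, so the replay fails.
fun verify(submitted: String, storedHash: String): Boolean =
    sha256hex(submitted) == storedHash
```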
> Then just add a time sensitive seed to it? I don't think it is equivalent to leaking plaintext. It can't be used to guess passwords on other websites.
You're reinventing password hashing and salting. Further, there's no guarantee that the hash cannot be used to guess passwords on other sites. For what benefit, exactly? Your hash is now the password, and basically as dangerous as the password was in a more conventional arrangement.
Pass-the-hash is a real kind of vulnerability that has been used to exploit real systems. We might be better off sticking with design approaches that don't have this problem instead of trying to fix our way out of it.
> If your SSL layer is compromised, you can't trust the client-side encryption. The attacker can send arbitrary javascript.
Are you sure this is what it's guarding against? A sophisticated application architecture might involve a load balancer decrypting and doing the initial routing, several sets of data handoffs, and then the application that needs it handling the password. Any one of them could mishandle or leak the password, but only the one at the end actually needs it in the clear.
How exactly is asking for my password to be hashed "reinventing password hashing and salting"? Seems like the opposite, no?
If your password is properly salted and hashed, it can't be used to guess passwords on other sites; that's the whole point of salting and hashing.
The fact that RSA is being used means that your plain-text password is going to appear on their servers. Maybe it won't get cracked in the SSL layer, but it is still there.
> Are you sure this is what it's guarding against? A sophisticated application architecture might involve a load balancer decrypting and doing the initial routing, several sets of data handoffs, and then the application that needs it handling the password. Any one of them could mishandle or leak the password, but only the one at the end actually needs it in the clear.
Do you realize that if an adversary even only has read access to the SSL layer, they can just copy the cookie and steal the account that way?
> How exactly is asking for my password to be hashed "reinventing password hashing and salting"? Seems like the opposite, no?
You've already started to add new things, like a TOTP-ish element, to stymie replays. Then the server has to check what it's been fed, having stored neither the original password nor the hash of the password it's been passed. It cannot be allowed to have the hash, because the hash is now the password. It needs something safe-ish to store that the input can be computed against to make comparisons possible.
Now you have all the problems of server-side hashing and comparison coupled with extra client-side hoops.
Again, what have you gained?
> Do you realize that if an adversary even only has read access to the SSL layer, they can just copy the cookie and steal the account that way?
You are absolutely correct. That is completely accurate in every single possible way.
Do you think that perhaps there might be other reasons to consider here? Such as debugging, logging systems, and so on? Perhaps there are design goals beyond blocking direct attacks. On an average day, most of these systems will be more likely to be accessed and used by authorized administrators than by external adversaries, after all. Many security incidents arise not out of malice, but out of tools behaving dangerously. I know I've dealt with sensitive material leaking into logs.
I hope I have made myself clearer. I can see I failed to communicate effectively previously. Please, don't hesitate to say so if I have failed either there or in understanding your points.
> You've already started to add new things, like a TOTP-ish element, to stymie replays. Then the server has to check what it's been fed, having stored neither the original password nor the hash of the password it's been passed.
I don't see how this is any different from Valve "reinventing" SSL by using RSA. This isn't a new concept or roll your own crypto. I just don't want my password to be in plain-text. The only thing the server should get is the hash. If you are using RSA on the password, that means your password is going to be in plaintext on their servers eventually.
> It cannot be allowed to have the hash, because the hash is now the password. Now you have all the problems of server-side hashing and comparison coupled with extra client-side hoops.
The original password? Ok go ahead and encrypt it (and also hash it). But only do it once and not every login.
> Do you think that perhaps there might be other reasons to consider here? Such as debugging, logging systems, and so on? Perhaps there are design goals beyond blocking direct attacks. On an average day, most of these systems will be more likely to be accessed and used by authorized administrators than by external adversaries
It just seems strange to me that you would have trouble trusting your administrators not to accidentally leak logs. What happens if they need to debug or log the app server behind the SSL layer? How likely is it that the dumb SSL termination layer is causing a problem and not the app layer?
> I don't see how this is any different from Valve "reinventing" SSL by using RSA.
What Valve is doing is layering cryptography to make it possible to do access control more granular than a pure binary. What you are proposing is layering hashing in a way that does not introduce access control.
I hope this is clearer!
> I just don't want my password to be in plain-text. The only thing the server should get is the hash. If you are using RSA on the password, that means your password is going to be in plaintext on their servers eventually.
OK. Now your hashword is on the server. Your hashword now has essentially all the properties of your password that you were trying to get away from in the first place. It is de facto your password now.
Sure, it's a different string now. Your goal is accomplished and the server doesn't have your magic string. Instead it has a different one that's equally magical, equally came from you, and equally security-sensitive.
That's why pass-the-hash was brought to your attention. Network security has been down this road before. We had login systems where the server only ever had a hash, and never the original magical string. They were not safer.
> The original password? Ok go ahead and encrypt it (and also hash it). But only do it once and not every login.
OK. So now we have a new thing that's identical to a password in all of its properties, except that it requires extra work client-side. Again, what have we gained? This is not an idle question, it's the only one that matters.
Valve has an answer, and it's a reasonably compelling one that speaks to real-life use-cases.
> It just seems strange to me that you would have trouble trusting your administrators not to accidentally leak logs.
I routinely see developers do things like dump full web request contents into logs without thinking about consequences. With that in mind, I think it's very possible that exactly this kind of scenario could happen. Indeed, I think it a virtual certainty.
The plaintext of passwords being stolen from the memory of servers that handle logins is a comparatively uncommon event, as far as I know.
> What happens if they need to debug or log the app server behind the SSL layer?
A great question! That's critical info. In this scheme, they can. They just cannot access the most sensitive contents, such as passwords. Literally every other part of the request has been decrypted and is available for use.
You'd actually end up hashing it twice: once using the salt to go from plaintext to what the server has stored, and then again using the challenge.
It has problems though. The strength of your password hashing would be limited by what the weakest client could do, rather than what the server could do. Asymmetric encryption ends up being simpler.
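Roughly (again with a bare SHA-256 standing in for a real slow KDF; function names are illustrative):

```kotlin
import java.security.MessageDigest

fun sha256(vararg parts: ByteArray): ByteArray =
    MessageDigest.getInstance("SHA-256").apply { parts.forEach { update(it) } }.digest()

// Hash #1 (client, every login): reproduce the salted value the server stores.
// This is the step whose cost is capped by the weakest client.
fun derive(salt: ByteArray, password: String): ByteArray =
    sha256(salt, password.toByteArray())

// Hash #2 (client, every login): fold in the server's one-time challenge so
// the value on the wire can't be replayed later.
fun proof(derived: ByteArray, challenge: ByteArray): ByteArray =
    sha256(derived, challenge)
```

The server recomputes `proof` from its stored value and the challenge it issued, so it never needs the plaintext at login time.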
At some point we must simply “know what we are doing” and do something. Software would be a tedious affair if you had to approach everything from the perspective that maybe you don’t really know what you’re doing and need to implement some way to verify it.
What are unit tests, linters, SAST tools, and more all for? Naively, it seems to me that we swim in a great sea of tools to verify that we do, in fact, know what we're doing.
An excellent question! One reflecting true wisdom.
The answer is that you never have true certainty. What you get is smaller error bars with each set of tools and tests until you find the error bars and effort involved both acceptable.
The other comment mentions tests, but I'd like to add to that. The reason it's impossible for anyone, even the single greatest programmer on the planet, to avoid creating bugs is the sheer complexity of the software we write. It's not so much that you'll make a mistake when creating a new program. It's that over time, as you make more and more changes, it's impossible to actually trace out the exponential complexity that arises from your changes.
The purpose of unit tests is to catch these, and yes, as you mention lower down, even those aren't infallible, but they help greatly. That's why even with some of the greatest programmers and extensive testing, we still constantly see major bugs from every single top tech company. I don't think there exists a single piece of completely bug-free software of non-negligible size out there.
And, adding assertions / very loud logs is a good way to be sure your assumptions hold: e.g. causing your tests to fail if the test user's password (or other privileged info, etc) appears in any logs is a fairly good extra safety layer.
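For instance, a sketch of such a check (the log location and fixture password are assumptions; the point is just that the test run greps its own logs):

```kotlin
import java.io.File
import kotlin.test.Test
import kotlin.test.assertFalse

const val TEST_USER_PASSWORD = "hunter2-test-fixture"  // hypothetical fixture

class NoSecretsInLogsTest {
    @Test
    fun `test password never appears in log output`() {
        // Assumed location where the test run's logs are collected.
        val logs = File("build/logs/test-run.log").readText()
        assertFalse(
            logs.contains(TEST_USER_PASSWORD),
            "privileged fixture data leaked into the logs",
        )
    }
}
```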
I heard about the security of a mobile banking app eight or so years ago, when mobile banking was still very much a new thing and trust was a big issue. They too opted to write a second encryption layer on top of TLS/SSL, fearing MITM attacks. That was also when iOS didn't support SSL pinning yet.
It seemed to work for them, security researchers went to town on it and while they quickly discovered there was a second encryption layer below SSL, they were unable to determine what it was and how to crack it.
IIRC their encryption was never broken, and thanks to that track record, they slowly increased daily spend limits over the mobile app.
Long term, because they were very forward-thinking and had competent native app developers (as opposed to the competition, who struggled for years with mobile web / cross-platform tech), they increased their market share by a lot and are now the largest bank in NL; I can't find historical data, but they went from 37% in 2016 to 40% in 2018.
this approach tends to be a self-own. Bug bounty and responsible disclosure folks don't usually have expertise in it, so you're just making the real attack surface less visible until someone with the expertise comes along and owns you deeper than you could have imagined. This, ironically, was also the case for Steam: https://steamdb.info/blog/breaking-steam-client-cryptography... (I helped make this bug)
There is a misconception that the responsible disclosure system reflects real security threats, but it unfortunately doesn’t. The areas of expertise in the real world are different, and sticking a bunch of crypto in like that tends to be a case of making your eventual problems more complex, bigger, and harder to find.
I humbly disagree - there are times when application-level crypto makes sense. E2EE in particular relies on application-level crypto, and any password manager or "secret server" that relies only on TLS is highly suspect, in my opinion.
I attended a talk at QCon SF 2019 where the speaker advocated use of application-level encryption (even in web apps) by default as a form of defense-in-depth. I was skeptical, especially since client code itself has to be delivered using TLS only (and thus a successful TLS attack renders further defenses worthless).
But it does seem that application-level crypto allows a lot of networked devices (besides the application servers themselves) to be rendered zero-knowledge concerning most application and user data. This allows us to trust those devices less.
I never made the argument that no application should ever encrypt anything at the application level. Password managers are a very niche use.
"secret servers" or key-management systems are just fine using TLS imo. I helped draw up the design for what is now the aws secrets manager. At the point you're reasoning about TLS being broken you might as well be focusing on detection, monitoring and key rotation because the whole internet is coming down with you.
> I attended a talk at QCon SF 2019 where the speaker advocated use of application-level encryption (even in web apps) by default as a form of defense-in-depth. I was skeptical, especially since client code itself has to be delivered using TLS only (and thus a successful TLS attack renders further defenses worthless).
defense-in-depth is a weasel word. Security isn't a bunch of layers that can be reinforced as such; often, things that are referred to as 'defense-in-depth' cover specific characteristics of one protocol over another.
For example, in password managers as you mention you end up with application level encryption. But this is because TLS doesn't provide the guarantees they want of server blindness, and some resistance to bad TLS certs installed at the OS level (a dubious security boundary, but regardless...).
> But it does seem that application-level crypto allows a lot of networked devices (besides the application servers themselves) to be rendered zero-knowledge concerning most application and user data. This allows us to trust those devices less.
This may well be true in niche cases. In reality, there aren't a lot of businesses out there who will overengineer their systems to know less about their users. Zero-knowledge applications are unbelievably subtle to make really work, and it's very easy to fool even experts in adjacent security fields into believing an application is zero-knowledge in some respect when it isn't.
I'm digressing. Application-level encryption in 2021 is very rarely what's wanted, and the existence of niche security cases in entirely security-focussed products doesn't discount that.
One thing I still want to say — when I spoke of secret servers that only use TLS, I was thinking primarily of products like Thycotic Secret Server, where deployment is left to on-prem IT staff, no E2EE exists at all, authorization uses code branching rather than security primitives, and where “vaults” are databases with DBMS-implemented encryption at rest. Security-wise, a hand-rolled solution actually could be better if written by an experienced non-security engineer.
I’m not familiar with AWS secret server, but I assume it’s deployed such that it inherits a lot of guarantees by virtue of being part of the AWS ecosystem.
There was a UK retail bank that tried a strategy like this.
I maintained (and still do maintain) it was security through obscurity and a waste of engineering effort that should've been spent on actually hardening the banking API server and migrating it to a modern stack.
I thought it inevitable, and indeed it got cracked twice anyway (despite the use of Arxan, extensive anti-debugging functionality, and the crypto being rewritten on at least one occasion).
Disagree that hardening the API server is any better. This is the approach common in the US market, and my team has broken everything available there too. Also disagree with insinuations that these banks don't have good, modern stacks. Barclays in particular is great. Way better than any challenger bank.
Lloyds also took a similar approach to Barclays, but they did a better job of it (although Barclays did a great job themselves too), and so we never got around to finishing it before we pivoted to the US market. As far as I know it's still unbroken, although I'm pretty sure my colleagues could easily break it today. We've since developed far more sophisticated reversing techniques.
> Disagree that hardening the API server is any better. This is the approach common in the US market, and my team has broken everything available there too. Also disagree with insinuations that these banks don't have good, modern stacks. Barclays in particular is great. Way better than any challenger bank.
By "hardening the API server" I mean fixing actual security vulnerabilities and improving the security posture of the API gateway, not going for further obfuscation layers or attempting to prevent third-party clients. Those are a waste of time. My position is that there's no point trying to prevent the user's access to his own data - but there is a point in enforcing e.g. access controls so customers can't access data for accounts they don't own or spend money they don't have.
When you talk about "breaking" banks in the US space, are you referring to gaining access to the API and reversing it (which has always been Teller's MO, no?) - or finding vulnerabilities with the API endpoints with actual financial implications for the institution?
> Also disagree with insinuations that these banks don't have good, modern stacks
I'm aware of your thoughts on this, though I respectfully disagree with the "modern" characterisation you have applied to legacy banks based on the sorts of tech I've seen and how it e.g. coped when faced with such exotic things as non-ASCII characters.
Monzo have at least never wasted time on obfuscating the fuck out of their API comms, nor forcibly preventing me from accessing my transactions on my rooted device or running their app under a debugger.
With respect I think you're speaking outside the bounds of your knowledge of these systems.
> By "hardening the API server" I mean fixing actual security vulnerabilities…
That is just table stakes. Have you ever found any vulns in bank API gateways? We tested authorization boundaries with our own accounts and never found a bug like that. In total I found two bugs, both quite low impact. One was unsafe object deserialization that could potentially lead to RCE; we obviously didn't try this. The payload was signed, so it most likely would have been difficult to exploit provided the signature was verified before deserializing the object. The other was an authentication bypass, which potentially could have given you read-only access to the user's accounts. You could then call up customer services and use recent transactions on the account as a knowledge-based 2nd factor to be given a code to upgrade the read-only enrollment to write access. That would require some knowledge about the customer (account number, telephone number, etc.) and sloppy CS, so I'd say it was also low risk. We reported both to the respective bank via internal contacts.
> When you talk about "breaking" banks in the US space…
Everything in this thread refers to countermeasures banks employ to keep third parties from leveraging their private mobile API gateways. When I talk about breaking things, I'm talking about breaking those countermeasures.
> I'm aware of your thoughts on this, though I respectfully disagree with the "modern" characterisation…
The content encoding of the underlying persistence layer is a tiny part. Their technology is, broadly speaking, very good. I am probably the world expert on the state of these systems because I have very deep knowledge of a large number of them, whereas even bank employees would only know about their own employer's systems.
> Monzo have at least never wasted time on obfuscating the fuck out of their API comms…
Monzo did other stuff to effect the same result, i.e. they only allow one device to be logged in at a time. So you couldn't use Monzo's app AND access your account via Teller at the same time.
I wouldn't use Monzo as an exemplar of technological capability. Barclays absolutely smokes them.
> With respect I think you're speaking outside the bounds of your knowledge of these systems.
I'm not going to argue with you on this; you're entitled to your opinion :)
> That is just table stakes. Have you ever found any vulns in bank API gateways?
Infosec consultancies find vulnerabilities in the API gateways of retail bank clients, yes. That's one of the things they are paid to do. And yes, I found issues myself in part of the open banking consent flow I was asked to test for a retail bank when I worked for such a consultancy. Such features don't get built without flaws.
The folks who are smart enough to reverse the mobile apps, any crypto used and write their own client get to play with these features in prod after they've been tested (either internally or via a contracted pen-test firm).
> Everything in this thread refers to countermeasures banks employ to keep third parties from leveraging their private mobile API gateways. When I talk about breaking things, I'm talking about breaking those countermeasures.
Then we seemingly agree these countermeasures are not effective - which is the point I attempted to express :)
> whereas even bank employees would only know about their own employer's systems
I have never been a bank employee so don't personally know, but presumably some of them move between orgs.
> Monzo did other stuff to effect the same result, i.e. they only allow one device to be logged in at a time. So you couldn't use Monzo's app AND access your account via Teller at the same time.
Only if you were trying to impersonate Monzo's iOS/Android OAuth clients instead of the expected AISP/PISP flows.
I remain convinced that rewrite was a purely anti-competitive move to stop/slow Teller and others. As someone that worked on a startup that required API access to transaction data, it was a huge pain in the ass having to write and rewrite screen-scrapers.
Security through obscurity isn’t an invalid technique if you also are doing security through cryptography too. One reduces your attack surface, the other reduces the number of attackers by increasing the required effort - both work. In part this is why many IoT devices also attempt to physically protect the underlying microchips, why HSMs destroy keys when enclosure tampering is evident, etc.
> Security through obscurity isn’t an invalid technique if you also are doing security through cryptography too
It becomes a distraction vs actually writing a secure set of endpoints in the first place. Folks get a sense of security from it which is entirely false.
The funny thing is, before the current ING app, there was a super crappy app that used an MSN chat bot in the background to query your account balance.
But indeed as you describe, since 2012 or so, the ING app is formidable and built by amazing people.
Source: I work at ING, in IT, but in a completely different area.
Funny you should say that, I'm looking at my iOS codebase from 2010 for an eastern-"ish" European bank that hired us to build their first banking app. They wanted a completely custom encryption layer in their app for all communication. I don't know if it was a case of not trusting the then-current tech standards, or a case of believing their engineers were better than everyone else. They were in charge of the SecurityCenter framework; we did everything else.
> "Het authenticatie protocol ziet er goed doordacht uit. Er wordt niet vertrouwd op SSL of TLS. In plaats daarvan gebruikt ING een extra encryptielaag waarvoor het wachtwoord wordt afgesproken via het SRP protocol. Ook genereert elk mobiel device een eigen profileId en een public/private sleutelpaar", merkt Van den Berg op.
In English:
> "SSL/TLS isn't trusted, instead, ING uses an extra encryption layer the password of which is negotiated using SRP. In addition, every mobile device generates an own profileId and a public/private keypair"
Don't know if OP was indeed talking about ING, but their app did, for a time, go very wrong on Android, as they seemed to have rewritten it on a Cordova/Phonegap stack, which subsequently tanked their rating on the Play Store. Looks like they have released a new native version since then - at least on the French store.
For a bank, increasing national market share by nearly 10% in just a few years is a pretty incredible feat. People don't generally shop around banks for checking/savings accounts, and the switching cost is very high and a pretty manual process (at least in the US, may be less onerous in NL).
Switching-cost in US can actually be negative - many banks will pay you hundreds to set up an account and get direct deposits to it for a small (~3mo) period. Payroll software is generally happy to split deposits, so this isn't a real barrier to entry.
The real barrier is who wants to bother switching banks? It's new UI to learn, new passwords, new apps, new cards, new exposure to security flaws, etc etc etc. I don't think that's any different in US vs EU.
Yes, bait money exists in Europe too. Because of it, some people are constantly moving their accounts around, or just have multiple accounts for different purposes.
Until 2018(?) this was really simple, because there were good APIs for online banking available. All you did was add a new account to your software and call it a day. But new EU security rules kinda killed them off, and banks are putting up more barriers against bank-hoppers. So at the moment it's a bit in transition.
I keep being somewhat baffled by Steam's login process every time I'm forced to go through it. Apparently Steam is such a cesspool of (pre)pubescent teenagers, with rampant account hacking and theft of funds, swag or whatever, that they feel the need to fortify the process if only to make it more inconvenient for the hackers.
- “Remember the password” barely ever works, even on desktop. Since I don't quite log in every day due to being too old for that, I have to redo the process every time—on a machine that I bought with my own money just for myself and intend to protect with both technical means and physical force.
- Somehow copy-pasting passwords from KeepassX/XC doesn't work on Mac, with the shortcut. Not sure if this is a misfeature of Steam, but I have to paste the password to an editor first and then copy out of there into Steam. (Seems though that ‘paste’ in the context menu does work—this might've changed since I first noticed the issue.)
- And of course, the weird variation on 2fa, via email, instead of the good regular TOTP. As is tradition by now, I'm also given the choice of installing yet another app on the phone, which somehow doesn't quite seem to serve my interest.
Hmm, what I mean is: I open up Steam on my Linux system. Usually it remembers my login, but sometimes I need to log in again. If I then type my password, it says: "type the code we emailed to continue".
So if I didn't have access to that email account, I couldn't log in and would lose the Steam account, even knowing the password.
Although, some of the methods from the link would still work, so that's solved, I guess.
Except that even if you use Steam Mobile you can't turn off email-based "self-service account recovery" in Steam. Your email account is always going to be the final key to control of your account.
Which is why my email providers have warned me that every time a botnet cracks my Steam account password, there are attempts to open what they think is my email account with the password they just cracked. My Steam account password these days is cracked scarily often, and I'm afraid my Steam account is now one of the weakest links in my online security footprint. I'm not dumb enough to use the same password for my email addresses as my Steam account, but the fact that Steam seems to be allowing password spray fast enough that machines keep cracking 50+ character passwords in days is alarming.
(ETA: Note the reason I mention 50+ is that I specifically vary the length randomly; when I don't the cracks drop to hours apart.)
I'm curious about what specific thing is signaling to you that your Steam password has been cracked. (I assume you mean brute forced?)
It's significantly more likely that you've been keylogged or phished if attackers are actually accessing your Steam account with passwords of that complexity.
I don't understand how it can be possible to brute force a 50+ character password
with ~6 bits per character (62 alphanumeric characters is about 5.95 bits each, and assuming random characters, which is what you mean, right?), that's roughly 300 bits of entropy; nothing in the universe could brute force that
Most of those old password-length "time to crack" estimates are based on a single machine. Many of the common ones you see today are based on the added assumption that they aren't spraying directly at a password endpoint but are instead predicated on breaking the hashes and the extra (increasingly minimal in the age of Bitcoin) cycles needed to hash/salt/pepper the passwords and/or building rainbow tables.
I believe that the password-spray capabilities of today's botnets against any endpoint that returns results as fast as network messages travel should not be underestimated in a distributed enough attack. Given that not varying the password length had a noticeable impact on time, the warnings from my email providers, and other increasingly paranoid measures I've taken [0], I have no reason to suspect that this is anything but a very distributed password-spray attack.
Simple GitHub searches seem to indicate that there are known password-spray-capable Steam endpoints that currently still leak password correctness/verification data regardless of 2FA being enabled (and also leak whether or not 2FA is enabled on the account), and that always fall back to email-based 2FA. (These leaks and that fallback would have me believe it's one of the Password Recovery or 2FA Recovery endpoints.) Though I've not attempted to run such gists/"utility libraries" myself to verify (I'm too lawful neutral/not a black hat whatsoever), at a surface level it seems like more than enough evidence to suggest botnets would use such things if enough people were posting "helpful password recovery tools" on GitHub that password-spray accounts you tell them to.
[0] The paranoia has gotten quite "fun":
- I only ever sign in to Steam now inside the Steam client and Steam Mobile app.
- I disabled all OAuth applications on my account, no longer sign in under any web browser, and have refused to allow new applications.
- I've removed all devices except my primary gaming desktop and mobile device.
- I've removed all credit card data that I can and haven't bought or paid for anything directly in the Steam client in years.
- There's evidence that password hashes used to be leaked from a file in the Steam client's folder. (I believe that file no longer exists in recent Steam clients, at least.) For that reason, I've turned on Windows Controlled Folder Access (aka Windows Ransomware Protection) on all of my Steam folders. This has been an amazing bundle of joy~ and has basically stopped me from playing Steam games. Games are developed by children and it is amazing the number of entry point binaries a single game might have to run, how often even "offline only" games still want to run binaries they copy or bury in random places in %LocalAppData% or worse %Temp%. The whack-a-mole to enable games to run under Controlled Folder Access becomes its own very not fun minigame before you can actually start the real game. (It's also really interesting to see what some games do when they fail to get folder access they just assume they'll always have. So many permutations of "the game works but crashes at weird points" or "the game thinks it is running on a Mac for stupid reasons" or "the game thinks you intentionally want to run it without the ability to save or load saves, because that's a thing people might do?".)
My paranoia suggests my next steps are only to isolate Steam to its own entirely separate user account on the machine and/or its own unique VPN.
My basic threat modeling assumes if they were compromising anything specific outside of Steam, they'd have compromised my email accounts already.
At this point it increasingly feels like the only reason I keep Steam installed is to reset the password every time I get a Steam Guard email.
I have no stakes in defending Steam, but—you realize that if someone were cracking passwords left and right for years then the web would be full of complaints like yours? Everyone would know that it's a thing that's happening. Eternal questions would be pondered to the sound of Guard notifications, lovers would gaze at stars with faint notificationing in the background, and musicians and poets would compose songs to that tune.
Frankly a keylogger on your laptop sounds more plausible.
My belief is that this is a canary in the coal mine. We know that password systems don't work in the long run. There have been lots of reasons to get people away from passwords for day-to-day things. Some canaries are going to die sooner than others, and I have some ideas why this particular canary of mine died early. There are other complaints out there about Steam specifically, of hacked accounts where passwords were "guessed" and then email accessed. Additionally, I have a pretty good idea of why, for what I think are very dumb reasons, my account is particularly well known to be a "high value" account (going back to the parent comment way above: depending on how you value it, my Steam account is worth more than the PC I connect to it with). Steam itself is also in the weird "entertainment" place where it has bank-like features, but not quite the same pressure to have bank-like security (because it's just "games" and "hats"). The article here itself points to things likely written prior to 2012 that are still in active use today in Steam's login path (whether or not you believe age/tech debt implies "broken", most banks have likely upgraded their login systems four or five times since).
(My account is one of the oldest accounts on the platform, predating Half-Life 2's launch, and originally accessed via dial up internet. It has several now rare collections of games and at least a couple now "impossible to buy" games. Most critically to it being "well known" to have such value, it has several of the most rare/valuable "TF2 hats", which I think is incredibly dumb and that the marketplace is a huge gambling mistake, and those were known at the time when all Steam inventories were public [oh, the spam and phishing attempts that generated back when that was public and easily accessible]. My limited regard for the marketplace and limited use of it would make it somewhat obvious if I had "sold them" in the time since inventories allowed going private.)
As for a keylogger, specifically, I would go insane if I ever had to type 50+ character passwords. The keys you will log are Ctrl and V. Sure that opens up questions to clipboard logging and/or Password Manager incursion, but as I mentioned above, I have enough reason to suggest the threat isn't that sophisticated (in part because it is just "games" and "hats"), and paranoid circumvention in place already (even beyond the ones mentioned specifically in the above comment).
Also, there are plenty of reasons it might not be happening as badly elsewhere as it is happening specifically on Steam. Microsoft (and Microsoft Research) has made it very clear in recent papers that distributed password spray (where the spray is spread out over large numbers of IP addresses/countries/etc) is the number 1 issue right now in passwords, and that detection and blocking are crucial. Steam has argued in the past that such things are impossible to do at their scale. (Microsoft would argue today that their scale in Office 365/Azure AD/Microsoft Accounts has easily now dwarfed Steam's scale.) There's enough evidence today (as I already mentioned) that Steam still doesn't have those detections/blocking in place (and are relying too much on Steam Guard/2FA to keep accounts safe). (Not to get too deep into the woods of Steam criticism, but the argument may not be that it is impossible at scale but that it is impossible to prioritize it within Valve's notorious management culture.)
You could attempt contacting steam and ask if they know how many attempts have been made from different IP addresses in total for login to your account. I feel like that's really the only way to verify what you're proposing. Steam likely has logs of all the IP addresses that attempt login to whatever account.
I'm skeptical of what you're proposing because it's not hard to design a system that freezes mass random-IP login attempts to an account after some low number 'x' of failed attempts, and then only allows previously successful IP addresses to continue to a successful login. As well as doing an email verification if the password is correct but being used from a new IP address. Roughly the kind of thing sketched below (a toy; thresholds and storage are made up).
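```kotlin
import java.time.Duration
import java.time.Instant
import java.util.concurrent.ConcurrentHashMap

// Toy sketch of the freeze logic described above (all names and thresholds
// hypothetical): after x recent failures, only IPs with a prior successful
// login for that account may keep trying.
class AccountThrottle(
    private val x: Int = 5,
    private val window: Duration = Duration.ofMinutes(10),
) {
    private val failures = ConcurrentHashMap<String, MutableList<Instant>>()
    private val knownGoodIps = ConcurrentHashMap<String, MutableSet<String>>()

    fun allowAttempt(account: String, ip: String): Boolean {
        val cutoff = Instant.now().minus(window)
        val recentFailures = failures[account].orEmpty().count { it.isAfter(cutoff) }
        return recentFailures < x || knownGoodIps[account].orEmpty().contains(ip)
    }

    fun recordFailure(account: String) {
        failures.computeIfAbsent(account) { mutableListOf() }.add(Instant.now())
    }

    fun recordSuccess(account: String, ip: String) {
        knownGoodIps.computeIfAbsent(account) { mutableSetOf() }.add(ip)
    }
}
```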
I have sent tickets to Steam asking for such corroboration. I've never gotten beyond a "don't worry Steam Guard seems to be working as intended" and general Tier 1 copy-paste responses.
So I figured I'd go read what's ‘password spraying’ that you mention:
> Password spray is the opposite of brute-forcing. Adversaries acquire a list of accounts and attempt to sign into all of them using a small subset of the most popular, or most likely, passwords.
Firstly, you seem to believe that password hashes provide only a small reserve of difficulty compared to the abilities of current computers. That's not so. Just read or watch any introduction on hashes: the most basic principle is that even with a huge cluster of top-of-the-line hardware, it would take billions of years to guess a password of a decent length. When hash algorithms are ‘broken’, like with md5 and sha1, it's because newly found weaknesses bring down their strength by a factor of billions.
Secondly, you seem to conjecture that attempting password guesses against a network service would somehow bring that difficulty down considerably, to reachable levels. However: local hash guesses are made on GPUs or specialized FPGAs, whereas servers run on regular multi-purpose CPUs—plus, if you had a server respond to login attempts nonstop, it would spend half of the time in context switches and kernel calls. Top http frameworks in pure C reach just over a million responses per second when doing nothing but sending empty responses. You're asking that Steam dedicate a fleet of thousands of servers to facilitate cracking your password. And on top of that, the service would also need a database that likewise serves billions of requests a second.
Additionally, modern hash algorithms like bcrypt are constructed so that they take considerable and configurable time (on any hardware), so the hashing rates are on the order of tens of thousands a second or less, instead of billions and trillions. Since Steam are evidently very concerned with account security, I'd guess they take advantage of these algorithms—and since you changed the password recently, it was probably hashed with the latest used algorithm.
Besides all of the above, a service easily foils password guesses by limiting the number of attempts against an account in a time span, which is by now one of the basic prescribed measures. The whole purpose of ‘password spraying’ is to sidestep this limitation by attacking a lot of users but using most common passwords. In no way does it help with guessing a single long random password.
Lastly, while it's conceivable that Steam could have some vulnerabilities that would make cracking its accounts easier, those wouldn't be burned by attacking the same accounts over and over for months.
To sum up: the whole magnitude of the task is such that no one would solve it just to steal your trinkets, even if they could. It's time to accept that either your passwords are easily guessable, or are lifted from you in some way.
> You're asking that Steam dedicate a fleet of thousands of servers to facilitate cracking your password. And on top of that, the service would also need a database that likewise serves billions of requests a second.
No, I'm just saying that I believe Steam presumably scaled naturally (through decades of growing usage and also decades of huge scale DDoS attacks) to something like that for other reasons and are possibly missing safeguards to prevent it being misused.
Obviously, I'm making cynical assumptions and failing to give Steam the benefit of the doubt here. I'm sorry.
> Additionally, modern hash algorithms like bcrypt are constructed so that they take considerable and configurable time (on any hardware), so the hashing rates are on the order of tens of thousands a second or less, instead of billions and trillions. Since Steam are evidently very concerned with account security, I'd guess they take advantage of these algorithms—and since you changed the password recently, it was probably hashed with the latest used algorithm.
The article points to evidence that the login system possibly hasn't been updated since around 2012. Plenty of systems were still using unsalted MD5 back in 2012. It's a huge assumption that they've kept up with modern hash algorithms.
Additionally, the SteamGuard files stored in the base client directory were reported to include MD5 hashes at least as recently as 2014. (Even worse that file contained long lived tokens directly able to bypass SteamGuard.)
I hope Steam is doing better than that today, but you can forgive my pessimism/cynicism after fighting this cycle much longer than I would have liked that the conclusions I jump to remain that Steam isn't doing enough to protect account security.
> It's time to accept that either your passwords are easily guessable, or are lifted from you in some way.
I've gone through a lot of paranoia and anxiety over this. I've done a lot to eliminate suspects and shrink the attack surface, and continue to do so. So far as I can tell this is specifically a Steam phenomenon, Steam is the weak link in the chain, and my other accounts seem secure, except that my email providers report failed login attempts from the same IPs mentioned in SteamGuard emails shortly after the SteamGuard timestamp.
Anyway, I've expended too many words of paranoia and cynicism in this thread. I appreciate the attempts to help.
> Steam presumably scaled naturally (through decades of growing usage and also decades of huge scale DDoS attacks) to something like that for other reasons and are possibly missing safeguards to prevent it being misused.
Just to drive the technical point home: such scale is basically just not feasible. We're talking literally thousands of servers doing nothing but md5 hashes, to vaguely bring cracking a shortish password into the realm of possibility. No one would set up such a system, any sane sysadmin would investigate the load long before it gets to such scale, and the budget would raise questions. Even if Steam uses md5, every little piece of logic around the hashing function multiplies the CPU load compared to bare hashing.
DDOS protection is done on specialized hardware, again long before the count gets to thousands of servers. You buy a box and put it in your datacenter in front of the balancer servers. In my experience, one box nicely handled load going to about two hundred application servers (iirc), likely with plenty of capacity to spare.
So you can estimate the necessary time just with http responses: 50 alphanumeric characters is 62^50 = 4.16e89 permutations, divided by 7.3 million = 5.7e82 seconds, or 1.8e75 years.
On that four-GPU box from 2016, cracking would take 3.3e71 years—which is considerably better but still doesn't quite fit in the age of the universe. So even md5 stolen from Steam Guard wouldn't help much in the case of a long password (unless some miraculous attacks were developed since 2016).
> So even md5 stolen from Steam Guard wouldn't help much in the case of a long password
(Though, with unsalted md5 or sha1, it's possible to find a shorter collision instead. But afaik it requires executing specific techniques instead of the regular algorithms, and obviously the Steam server isn't doing that, so it must be done locally with a stolen hash.)
However, as near as I can figure, it offers no way to provision a second device with the same seed (or store the seed).
It's one of two sites that I use TFA for that I don't have a backup for, which is mildly annoying. I do have recovery codes, and will all too happily fall back on SMS.
For me, when I download Steam Authenticator it's tied to my phone number, so the first time I log in it will send me a text message code, and from there it generates the authenticator codes in-app.
Well, in the scenario where you lose access to your email address, you would theoretically still have access to your phone with the Steam app already installed and authenticated.
I'm not sure what you mean by auto-login not working. I've had my Steam account for 11 years and I can remember a time where that was the case, but it works so reliably nowadays I didn't even remember it was an issue until reading your comment.
I'm 90% sure it's an account-based bug. My account has had this issue for ~6 years now (I've used Steam for 15 years). It happens on any browser or device. No cookie clearing, doesn't happen to any other account, etc. Every time I bring it up, the majority of people say they don't have the issue, while a small handful of others chime in about experiencing it too.
It has to be a bug, or maybe a security feature for accounts of a certain size?
I think they restrict the 2FA methods since they want tighter control over them. For example, if you use their Steam app for 2FA and you need to move it to a new phone your account gets put into a restricted mode and you cannot use the Community Market for 15 days. This restriction also gets applied to any item you touch, so if you trade an item to someone else, the store restriction moves with the item.
They also strong-arm you into using the app. If you log into a new device (or Steam thinks it's a new device since you cleared cookies) and you don't use their app for 2FA, then the device will not be able to trade or use the market for 7 days. They only waive this restriction if you use their app for 2FA and it has been active for at least 7 days.
It's a bit frustrating since the Community Market/Trading is likely only used by a minority of users, but seemingly a ton of login limitations are imposed because of it.
> It's a bit frustrating since the Community Market/Trading is likely only used by a minority of users, but seemingly a ton of login limitations are imposed because of it.
It's probably because it moves a significant amount of money, between trading cards, CSGO knives, TF2 hats, etc. Of course, nothing comparable to banking systems and general-purpose marketplaces, but I personally think those protections only add to the product.
> Of course, nothing comparable to banking systems and general-purpose marketplaces
Rumor is, some MMO games have markets exceeding the GDPs of plenty of first-world countries, and in-game items are used by gangs to move funds across borders. Both Cory Doctorow and Neal Stephenson wrote books featuring this phenomenon, and I'm pretty sure they both usually take their ideas from reality.
Since Steam is a Big Guy, and its market is dedicated to this very activity and sits on top of many games at once, I'd guess it to have a sizeable slice.
Doctorow's book is “For The Win” (if I'm not mistaken—really need to get into the habit of writing some notes about the books I read, especially when marathoning through an author's bibliography).
Stephenson's book is “Reamde”, which is a weird, even for him, mix of realistic-sci-fi-about-computers with an adventure thriller.
I think I also found some articles about actual size of virtual economies and the use of them by crime. But those likely went into the Pile To Read, which is a rather sad fate in my case and the hope is thin.
I don't use the Steam 2FA app and when I sell Steam trading card, there's a banner saying 'you haven't used our 2FA, market listing will be held for 7 days'. But then usually the cards I list are sold the same day, I don't really understand why; perhaps because (on the Steam client), I rarely have to log in?
I have the same issue. On Mac, I log in to Steam about once a week (sometimes longer than that). I have to login with my password and get a Steam code almost every time.
Does the Mac version of Steam install through the Mac App Store (or is it offered there), and does that store also have the webview restrictions? If so, I'm wondering if that's triggering them to use web login methods. I haven't logged into Steam on the desktop again since I installed it on this new machine ~6 months ago, but I have to log into the website and get a code almost every time I want to do something there, so I wonder if the Mac version of Steam is somehow under the web-based login restrictions.
LOL, that's true. I wasn't even thinking of what Steam does, and was just considering its authentication mechanism. Doubly silly of me, since I'm definitely interested in and following to some degree the Epic lawsuit.
Huh strange. I’ve used steam on windows, Mac and Linux over the years and with different frequencies of use and still only ever have to manually log in once every few months.
I stopped using Steam because it was too annoying. Not sure what all they get up to for their anticheat crap, but something I do with my network/machine apparently sets them off.
Fighting with bullshit like this is not what I'm looking for when I want a game, so screw it, if a game needs Steam, I don't need the game.
It's not an issue with the app, it's in the browser only. I like using the browser for browsing since tab support is better, bookmarking and also extensions such as Steam Enhanced.
It's a weird variant of TOTP, but you have to be rooted or modify the app in order to extract the secret to use with other apps. Years ago I wrote a script to do it, but I'm not sure if it still works -- it's not really worth doing imo
The issue is that the app uses a custom protocol to confirm Steam community transactions (Steam inventory trades, etc). So if you use an authenticator like AndOTP, you lose the ability to confirm those.
I reverted to e-mail. I only have free software on my phone, and don't regret that choice.
> “Remember the password” barely ever works, even on desktop.
It happens to me only when I keep switching machines (sometimes I play on Linux, sometimes on Windows) => I guess that it's some kind of security check.
If I stick all the time to a single machine then I basically never have to re-login (if I don't stop playing for something like 1 month or longer).
I lost my password, access to my email address and all information that could have been used to identify me like my paypal account. This was somewhere around 2017. It took 2 emails to get them to transfer my account to my new email address and reset my password.
Here's my experience with recovering a Steam account. Some time before I lost access to my Steam account, I bought Portal 2 for the PS3, which included an activation code to get the PC version on Steam for free. When I asked Steam support to help me regain access, they asked me to write a specific thing on the flyer with the activation code, scan it, and send that in to verify it's my account. After that they helped me recover the account.
Valve introduced Steam and its security by Gabe announcing his login and password to the world. You couldn't get in, though, precisely because of the 2FA.
This was a LONG time ago when things being secure on the internet wasn't a given to most people.
> “Remember the password” barely ever works, even on desktop
Is that for the browser or the client? I've had this issue with the browser for the past 6 or so years. Every time I bring it up, a few others mention having this issue, but not everyone. I think it's an account-based issue, since it happens on any device I use. It only happens with Steam and no other site.
The 3600-seconds-plus-a-bit (~1s) interval is a common thing that happens when your periodic work is one-shot and repetition is achieved by scheduling the task again after completion. E.g. consider the following code (in Kotlin, but hopefully still readable):
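(A minimal sketch of the pattern; `generateKey` and `publishNewKey` are stand-ins for the real work.)

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Stand-ins for the real work; assume each takes about a second.
fun generateKey(): ByteArray { Thread.sleep(1000); return ByteArray(32) }
fun publishNewKey(key: ByteArray) { Thread.sleep(1000) }

val scheduler = Executors.newSingleThreadScheduledExecutor()

fun rotateKey() {
    publishNewKey(generateKey())
    // The next run is scheduled only after this one completes, so each cycle
    // starts ~3602s after the previous one rather than exactly 3600s.
    scheduler.schedule(::rotateKey, 3600, TimeUnit.SECONDS)
}

fun main() {
    scheduler.schedule(::rotateKey, 0, TimeUnit.SECONDS)
}
```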
If `generateKey` and `publishNewKey` take around 1s each, then you'll observe exactly this behaviour - the timestamps will steadily drift from the original value.
Scheduling fixed pauses between runs is often better (for jobs where a fixed frequency is not critical, which is most background jobs) because you then don't have to care whether the work completes within the period.
It is certainly simpler and less fragile than various solutions involving a cron-like scheduler and locking (by the way, the implementation of cron itself is somewhat more complex than it might look, precisely because of this issue).
One thing that somewhat surprised me about typical industrial automation is that running the logic in some variation of a do_work(); sleep()/yield(); loop is pretty common (a typical modern PLC works that way), and nobody seems to care much about the resulting latency jitter, which is totally horrible from a theoretical standpoint but insignificant in practice.
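On the JVM, at least, both policies come off the shelf, so the choice is one line (a sketch; `doWork` is a stand-in):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun doWork() { /* stand-in for the actual hourly job */ }

fun main() {
    val scheduler = Executors.newScheduledThreadPool(1)

    // Fixed rate: runs stay anchored to the initial start time, so the
    // schedule never drifts; an overrunning job just makes the next run
    // fire as soon as it finishes.
    scheduler.scheduleAtFixedRate({ doWork() }, 0, 1, TimeUnit.HOURS)

    // Fixed delay (the alternative): the next run starts a full hour after
    // the previous one *finishes* - the self-rescheduling behaviour above,
    // and the source of the "3600s + a bit" drift.
    // scheduler.scheduleWithFixedDelay({ doWork() }, 0, 1, TimeUnit.HOURS)
}
```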
Ideally you'd use something modern that invokes your function every hour (cron? :P) so that the rescheduling is detached from the function. I think if generation takes X hours of raw CPU computation where `X >= 1` then as long as you've got C cores and `C > X` you should be OK?
Out of curiosity, why is having the process scheduled tightly with (something akin to) cron ideal to you? atd is, to me, a perfectly reasonable alternative. I guess it depends on the environment a developer finds themselves in when implementing the feature. It might just be easier to set up the next scheduled job than to implement cron-like features in the system that executes the scheduled junk.
To me it's not much of a stretch to imagine that this is what they're already doing, and the time between the scheduled task triggering and a new one being set up might take that long to begin with. The whole setup feels like systems with atd scripts that often rescheduled themselves (possibly based on some condition, or using intervals with some variability depending on system load or other such state).
If you are wondering why they do this, the answer is not because they don’t trust TLS.
It is (likely) because they use geographically distributed terminating load balancers, perhaps owned by someone else or run in someone else’s POP, and are trying to prevent passive collection of passwords.
I'm assuming it's more that they used to support logging in without SSL, and they never saw a reason to put in the work to rip the extra layer out of the login flow once HTTPS became mandatory.
Except that without SSL, some JavaScript could be injected to grab the password completely outside of the RSA encryption. So assuming there is already a MITM who wants the password, all you'd be doing is making his attack slightly more complicated.
The passive collection case is presumably enough to justify it. Like, after those folks who were sniffing Facebook and MySpace cookies on unsecured WiFi routers were caught, I can imagine pushing for something like this, "just to force them to single us out and perform an active attack on us, which most passive WiFi sniffers are likely not willing to do," or so.
Another separate derivation would be: “We log every single call. Period. I will take on whatever cost to have that oversight of my system.” Well, that's problematic, Dr. Boss, because several calls have PII in them and we want to be careful with how we store that. “OK, we encrypt the PII in transit so that our logging doesn't have access to it.” Well, OK, but our “log everything” philosophy is now also logging the keys it was encrypted with, which the client has to fetch. Every call, right? So we are still storing the PII for any hacker to decrypt. “Well, let's use asymmetric encryption so that this information is not sufficient to decrypt.” OK, but I can still connect information about how you were playing this game at this time to how you were playing that game at that time. The logs contain that second-order PII that exists in correlations, because you use a deterministic process to encrypt. (And at this point the obvious thing to do is a nondeterministic encryption process, but you can also just rotate the keys periodically so that this sort of correlation only works over very short timescales.)
Just saying, HTTPS assumes that the problem is insecure channels between secure endpoints when the problem can also be at one or the other endpoint. Like another person said, you might also decrypt right before a load balancer and then route the sensitive data to some other data center because it has lower overall load, etc. etc.
Disagree. There's a big difference between passive and active collection attacks. If the stakes are a site-wide password breach, it can make sense to eliminate passive attacks specifically.
Remember, sometimes breaches are caused because someone at the POP left HTTP parameter logging enabled.
Supporting login without SSL means the backend accepts it, not that the login page is ever served like that. Remember that they have desktop and mobile applications for various platforms, where the signing would be part of the application. Although the keys...
> Except that without SSL, some JavaScript could be injected to grab the password completely outside of the RSA encryption. So assuming there is already a MITM who wants the password, all you'd be doing is making his attack slightly more complicated.
An SSL client will cryptographically verify the authenticity of all messages received from Valve's servers, so the resulting webpage can't have any JavaScript injected by someone without Valve's private key.
Without this kind of authentication, encrypting the connection would be pointless.
the assumption is that the javascript would be injected into the page on its way from steam servers to your browser. ssl would prevent that. i think you're imagining a case where a user has (for example) installed a malicious browser extension. ssl would not help with that.
I guess that works, but it only really prevents surreptitious password collection. If you're in a position to mount active attacks (e.g. MITM), you can just substitute their public key with your own.
The stakes are “we lost all the user passwords”. This is a problem that can occur if e.g. the POP is logging too many things. Preventing passive collection at POPs also prevents all sorts of footguns like these that can lead to a breach, it is smart security sense IMO.
It also potentially protects the passwords from their web servers if they implement it like that. They can pass the encrypted password to a separate service that has the private key and decides to give you a session or not.
This would also cover intercepting proxies like many corporate networks have, and potentially protect against less sophisticated malware that installs MITM proxies and root CAs on the local computer to intercept local traffic (not sure if this is still a common type of malware on Windows computers, but I've seen it before).
If you root your phone you can get the secret out and use it with regular TOTP software that implements that variant (I forgot the name - I just know Bitwarden supports it).
Not just that, I just noticed that the Steam app does not support iPhone backups. Blizzard app does, as does the Verisign VIP Access app. Pretty lame. The restore SMS didn’t arrive on time either that evening, fun!
World of Warcraft also uses SRP for logging in via the game client. Or at least they did when it was originally released, I don't know if that's still the case.
One reason not to trust TLS is that it's one system that you want to be secure for all your users - indefinitely. If in the future there was some method to crack the TLS or the appropriate keys/certs were leaked, any recorded traffic could be retroactively cracked.
Remember also that it wasn't so long ago we were talking about things such as the POODLE attack. For all we know, some bad implementation of TLS 1.3 could default to some crappy, easy-to-crack algorithm.
I believe there was a paper (can't find it now) that estimated the cost to crack a specific TLS setup at about $10 million USD in processing, going back some years. (I think it was in reference to something like half of VPN traffic at the time using the same keys.) If Moore's law still applies in any sense, that cost likely halves every two years, and people only really change their passwords if they have to.
Another reason is that it reduces risk server side if you are never handling user passwords - at worst an attacker gets a temporary hash that's valid for a short time, specific to that server. Maybe they can do some harm during that time, but you can ultimately revoke that key and undo the changes to the user's accounts.
> If in the future there was some method to crack the TLS or the appropriate keys/certs were leaked, any recorded traffic could be retroactively cracked.
This is incomplete. TLS does allow for ciphers that enable Perfect Forward Secrecy (PFS) to prevent this. Those ciphers are not the most commonly used ones, but to describe TLS the way you do implies it's a flaw in TLS.
> This is incomplete. TLS does allow for ciphers that enable Perfect Forward Secrecy (PFS) to prevent this.
Sure, it was simplified. I can't remember exactly what the support for PFS was like. And given that it presumably requires an additional DH exchange, I imagine it would often be disabled for resource reasons.
There is this study back from 2013 (which the OP would count as the early days of da internetz) which says that, of the top 1M sites that support SSL/TLS, 74.5% also supported DH/DHE (i.e. perfect forward secrecy).
That was a substantial rise compared to the 2006 survey, which got 57.5%.
AFAIK contemporary browser versions preferred DH/EDH as soon as they got support for it.
The proposition is that the NSA has a large black budget, and it could plausibly have done the math to unwind DH with the most popular 1024-bit DH primes, and certainly would be able to do this for 512-bit DH.
Nobody does this in 2021. Your browser is using X25519, which is the same concept but with elliptic curves instead of modular exponentiation of integers.
If Steam were concerned about certain TLS parameters, they could just ensure they never agree the worrying parameters. It wouldn't make any sense to instead bake some other mechanism for login and then trust TLS for everything else.
Yahoo! Mail also does (maybe "did", I looked at it a decade ago) something similar. When the user opens the login page, s/he gets a random string in one of the hidden form fields; IIRC it hashes the user-entered password, then appends the random string, hashes again, and sends this result to the backend.
On the backend, it knows the random string it sent to the user, and it has the hashed password in its DB, so it can do the same algorithm and compare the results.
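If I read that right, the scheme boils down to something like this (SHA-256 standing in for whatever hash they actually used):

```kotlin
import java.security.MessageDigest

fun sha256(s: String): String =
    MessageDigest.getInstance("SHA-256").digest(s.toByteArray())
        .joinToString("") { "%02x".format(it) }

// Client side: the browser never sends the password itself.
// storedHash is what the server keeps in its DB: hash(salt + password).
fun loginResponse(salt: String, nonce: String, password: String): String {
    val storedHash = sha256(salt + password)
    return sha256(storedHash + nonce)
}

// Server side: recompute from the DB value and compare.
fun verify(storedHash: String, nonce: String, response: String): Boolean =
    sha256(storedHash + nonce) == response

// Note the catch discussed just below: anyone who obtains storedHash from
// the DB can compute loginResponse without ever knowing the password -
// the stored hash *is* the effective password.
```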
Yeah; I've seen this done elsewhere and facepalmed.
Actually, I've seen this done -worse- elsewhere, where they were actually encrypting the password using a symmetric key. So if you sniffed the traffic and never loaded the website, I guess you'd not know the actual password... but you wouldn't need it; it was as good as the password for the purposes of logging in. If you did load the website, you could still determine what the plaintext password was.
It was really irritating, since I had to figure out what the encryption scheme of a backend app was doing (when I only had access to the frontend code, and the datastore).
Ah, right, I misread the original description of what they were doing.
As is, then, it really is just making the hashed password the new password. If I can get the hashed password out of the DB, I can load the login page and simply skip the initial hashing step that's done on the frontend. I now have access to the account without ever knowing the original password.
That protects against an attacker reading network traffic. It does not protect against an attacker that has the hashed password from the DB.
The only thing you need in order to authenticate at any given time is the hash of (hashed password + nonce). The latter you get for free, at any time, from the server, so you only need to know the hashed password -- not the password itself. Since the hashed password is directly stored in the DB, if you get your hands on that you can authenticate.
Right. My mistake. I should have thought it through more thoroughly before posting. Hopefully my comment doesn't mislead anyone, I'll try to do better in the future.
Regarding the rotation timestamps: My guess is that the rotation is implemented as a "generate a new key when needed" rather than a cron: When someone asks for a key to issue to a user trying to log in, try to fetch a cached one. If one is there and it's < 1 hour old, return it. Otherwise generate a new one and record its timestamp.
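A sketch of that generate-on-demand rotation (the names and TTL handling here are mine, not anything from Steam's code):

```kotlin
import java.security.KeyPair
import java.security.KeyPairGenerator
import java.time.Duration
import java.time.Instant

// No cron involved: the observed rotation interval is one hour plus
// however long after expiry the next login request happens to arrive.
object LoginKeyCache {
    private val ttl = Duration.ofHours(1)
    private var cached: Pair<KeyPair, Instant>? = null

    @Synchronized
    fun currentKey(): KeyPair {
        val now = Instant.now()
        cached?.let { (key, createdAt) ->
            if (Duration.between(createdAt, now) < ttl) return key
        }
        val fresh = KeyPairGenerator.getInstance("RSA")
            .apply { initialize(2048) }
            .generateKeyPair()
        cached = fresh to now
        return fresh
    }
}
```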
Seems awfully complicated, slow and error-prone to use full-blown RSA for this. Why not send a challenge token and a hashing salt, and ask the user to reply with crypto_hash(crypto_hash(salt, password) + token) - effectively a one-time password?
On the backend they already have crypto_hash(salt, password), they know the token they sent so they can build the same hash and see if it matches. This way the backend actually never has access to the non-hashed password.
The only inconvenience I can see is that you can't transparently rehash on login on the backend if you decide to migrate to a different, potentially stronger hash algorithm later. But then again, if the worry is that passwords could leak in the backend, using hashes makes that effectively impossible by construction.
I guess nobody gets fired for using RSA. But at the same time doing "serious" crypto in JS always feels icky to me.
That kind of construction involves storing whatever is required for login in password-equivalent form on the backend side. You can get around that limitation by using something like SRP, but that is even more complex than using RSA. On the other hand, an SRP-like construction would have real security benefits, while passing an RSA-encrypted password over the same channel you got the public key (and the implementation of the whole thing) from is of somewhat questionable benefit.
I have wondered many times why we still send the password over the wire (even if in SSL). It should be hashed with a salt every time before being sent! A lot of people reuse their passwords, the user shouldn't trust the website to hash it.
Because calculating the salted hash on the client side will just substitute the hash for the password and render the whole hashing useless. Also it would require an additional roundtrip to the server in order to get the stored salt.
Then there is the UX problem where a mechanism like that would have to be implemented on the browser level (and in fact it is, as Authorization: digest is mostly what you are proposing), which according to some leads to an “ugly and confusing” UI.
> Because calculating the salted hash on the client side will just substitute the hash for the password and render the whole hashing useless.
I don't understand what you mean. Just in case I didn't make myself clear, I don't mean substituting the hashing on the server, I mean adding it on the client.
> Also it would require an additional roundtrip to the server in order to get the stored salt.
It could be salted with some constant/domain name.
> Then there is the UX problem where a mechanism like that would have to be implemented on the browser level
What I am saying could perfectly well be done with JavaScript, although I don't see why browsers couldn't integrate it too.
> which according to some leads to an “ugly and confusing” UI.
I am completely lost; I'm not sure you understood me, and I don't understand what you mean. Can you explain further or provide a resource that explains this UX problem you're talking about?
The first two points have to do with the fact that if you do the whole process only with symmetric primitives, you are only splitting the traditional salted hash into two computations, one of which then becomes essentially redundant. If the client uses a deterministic salt, then whatever you send over the wire is equivalent to the password and constant for every authentication. If the server sends a random challenge which the client then hashes together with something derived from the user's password, then whatever value is stored on the server to check the validity of the response is sufficient to fake the response (and is also essentially equivalent to the plaintext password, as it has to be derived from it in some deterministic manner independent of the random salt/challenge). That is to say, the protocol has to be somewhat more complex than that and involve asymmetric cryptography to solve both of these issues at once.
There is no technical reason why this could not be implemented in pure JS (and there is lot of things that do something like that with varying complexity and security properties). But there is the question of exactly what is the threat model where the server is trusted to provide implementation of code that defends against compromise of that exact same server.
And as for the UX/UI issue of browser-based security: for security features like this to be truly secure, the user has to be sure that he is interacting with the native browser/OS UI and not with something that can be intercepted by JS (or some other untrusted code). This is the reason why various parts of the browser UI cannot be hidden, why legitimate permission popups overlap the browser toolbar, and it is the idea behind Ctrl-Alt-Del in Windows NT. Such UI by design cannot look like part of your website/application, which both confuses even technical users and annoys marketing people because of the added friction.
What also didn't help the adoption of any kind of more secure authentication for public-facing websites is that in IE the resulting UI was ugly, inconvenient and in some cases downright broken (e.g. the dialog box used to confirm that you really wanted to use a certificate from a PKCS#11 token not only didn't say what for, it didn't say anything except “Error! Yes/No”).
> If the client uses a deterministic salt, then whatever you send over the wire is equivalent to the password and constant for every authentication. If the server sends a random challenge which the client then hashes together with something derived from the user's password, then whatever value is stored on the server to check the validity of the response is sufficient to fake the response (and is also essentially equivalent to the plaintext password, as it has to be derived from it in some deterministic manner independent of the random salt/challenge)
You seem to misunderstand the purpose of what I am proposing. I am not proposing this scheme as an alternative to what's proposed in the blogpost.
I am proposing this procedure as an extra protection step for password reuse. If you were to run an exploit on the server that reads its memory contents, you may find a user "Admin" that uses the password "FavoriteFood-DateOfBirth".
If you sent the original version of that password, then such an attacker would find real-world information about you. That information can be used to attack your identity on other websites. If the password is hashed, that information doesn't get leaked. Sure, the attacker can fake authentication to this server, but they have not gained real-world information about their target.
Without salt, the attacker can fake the authentication on every service where that password gets reused. But with salt, every server sees a completely different password on their end. Essentially, your hashed password with salt has become your new password in the server's eyes.
> for security features like this to be truly secure, the user has to be sure that he is interacting with the native browser/OS UI and not with something that can be intercepted by JS
Yes, you were one step further than me on this front. I do agree that communicating this to the user is non-trivial.
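A minimal sketch of that per-site salting idea (a single fast SHA-256 purely for illustration - a real version would want a slow KDF like Argon2, since a fast hash is still brute-forceable offline):

```kotlin
import java.security.MessageDigest

// Salt with the site's domain, so the value each server ever sees is
// unique to that server, even when the user reuses the same password.
fun siteSpecificPassword(domain: String, password: String): String =
    MessageDigest.getInstance("SHA-256")
        .digest("$domain:$password".toByteArray())
        .joinToString("") { "%02x".format(it) }

fun main() {
    val pw = "FavoriteFood-DateOfBirth"
    // Same password, different values on the wire per site:
    println(siteSpecificPassword("store.example.com", pw))
    println(siteSpecificPassword("forum.example.org", pw))
}
```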
Steam, and especially CS:GO, has a problem with phishing sites (fake Steam OpenID pages) where attackers, after getting access to the Steam accounts, can automatically create permanent access by generating API keys to control those phished accounts.
This is used e.g. to swap trade offers in realtime, i.e., a trade offer with the actual account is replaced by a trade offer with a bot with a similar looking profile (all set up automatically).
All of this is done in the timeframe between the user setting up the trade offer and the actual 2FA mobile confirmation of this trade.
People have been phished like this for years, and Valve fails to take responsibility and implement a simple anti-automation measure at the API key generation step (e.g. email confirmation or a captcha).
The monetary damage done to users is probably in the high thousands, if not millions, at this point in time.
It's stuff like this that makes me wish for a _version_ of Windows that is completely built & operated by Valve. Or at least, I wish Microsoft could implement such features for their login interface.
I love gaming on Windows & PC and would love to have the PC have a "Big Picture mode" friendly UI, _throughout_ the OS. Some gimmicks I have had to resort to are to set up my PC Sign-In to be _without_ a password and on a _local account_ on my Win10 PC, along with having Steam start in BigPicture on startup. This way I can switch on my PC and have my controller connect to start gaming just like a console; but way better graphics of course :)
It's these tiny affordances that collectively add up to great User Experience features.
I've always had some concerns with desktop apps / mobile apps. Unlike browsers, which have a lock symbol, how can desktop apps indicate whether communication is over TLS, for example? There is also the challenge that it 'was' TLS at one point when you first inspected it, but it could later change to some non-secure transmission without you being informed of the change.
What might be some solutions to this? I have yet to see anything standardized for this purpose. Other than, loosely, a 2FA token scoped to login only - but even then, you don't know whether the transmission to the endpoint was over a secure channel.
I kept reading, waiting for the author to address man-in-the-middle attacks, but there's no mention. This adds no additional security there: you can easily provide your own keys or JavaScript and completely bypass it.
Like others have suggested, I get the impression this system is assuming TLS will work and perhaps isn’t trusting the server the password ends up on.
Not sure if anyone noticed, but according to the "login.js" snippet, Steam removes all non-standard ASCII characters from the password before encrypting it...
Doesn't that essentially reduce the password's strength? Especially if there are a lot of non-ASCII characters in it...
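If the snippet does what it looks like it does, the filtering is roughly equivalent to the following (the exact character set kept is my assumption), which means distinct passwords can collapse into the same value before encryption:

```kotlin
// Keep only printable ASCII; silently drop everything else.
fun stripNonAscii(password: String): String =
    password.filter { it.code in 0x20..0x7E }

fun main() {
    // Every non-ASCII character vanishes, so these two distinct
    // passwords collapse to the same value:
    println(stripNonAscii("pässwörd™"))  // "psswrd"
    println(stripNonAscii("psswrd"))     // "psswrd"
}
```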
I would think so, yes. Maybe they are trying to prevent problems with users logging in from computers using different code pages? Is that still a problem?
> That begs the question why. Why bother creating such a weirdly intricate system on top of something that works just fine on its own? I have my own theory, but keep in mind it’s just that.
To avoid admins (or hackers) in enterprise "SSL breaker" boxes from exfiltrating passwords.
How many enterprise admins are trying to get employees' Steam passwords? I think the "it used to support logging in without SSL" theory is more likely.
I wouldn't call it an impossibility; Steam accounts often carry real, huge value to their owners, going into the thousands of dollars in either games or trade items.
Even if it wasn't the main reason, it probably played a role. Some small-time admins in education facilities would probably have an easy time with this stuff and wouldn't get caught doing it.
Steam was always a high-value target for hackers and a low-priority one for law enforcement (who cares about your stolen games?). It makes a lot of sense for them to implement every security measure conceivable.
This strange solution looks like a legacy of times when Steam used HTTP instead of HTTPS. Maybe they just didn't bother to update working code after migration to HTTPS?
Smaller blast radius if something goes wrong.
[1] https://twitter.com/sroussey/status/1347688753221931010?s=21