If filesystem access is a legitimate concern, you have bigger problems. Even if passwords were secured by FIDO or similar, session tokens are not.
If you compromise a computer, you can compromise web sessions. There is no mitigation for this. Shame on the author for attempting to create panic when far more productive security can be achieved elsewhere.
I’m so sick of these security articles. It’s become a boy-who-cried-wolf situation, with endless articles claiming something is critically insecure when the attacker in question already has end-game access and the ability to do just about anything anyway.
Compromised sessions are not necessarily a concern, unless the attacker has physical access to the location, since systems can detect when the location changes, e.g. an IP address change prompts the user. Not perfect, and hugely inconvenient for people with dynamic IPs.
AND filesystem access IS a concern, and not just when someone has physical access. E.g. you do not know for sure that the code running on your own system was designed with your interests in mind.
Intentionally or by mistake, a password database file could be leaked, and broken into if uploaded to a server on the internet outside of your control; if your encryption key is uploaded along with it, goodbye passwords. It could be as simple as a piece of software uploading "telemetry" data and "accidentally" including the browser's password database along with the encryption key.
> Compromised sessions are not necessarily a concern, unless the attacker has physical access to the location, since systems can detect when the location changes, e.g. an IP address change prompts the user.
I don't think that's a valid approach anymore. FWIW, I have coded these restrictions into auth tokens before (i.e. reject the auth token if it comes from a new IP), but had to get rid of it because too many ISPs change users' IP addresses frequently, especially for mobile clients.
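For illustration, the kind of IP-pinning check being described might look like the minimal sketch below (all names are hypothetical, not from any particular auth library). The last case is exactly the problem described: a legitimate mobile user whose carrier rotates their address gets locked out.

```python
# Minimal sketch of binding an auth token to the IP seen at login
# (hypothetical names; real systems store this server-side per session).
sessions: dict[str, str] = {}  # token -> IP address at login

def login(token: str, client_ip: str) -> None:
    sessions[token] = client_ip

def validate(token: str, client_ip: str) -> bool:
    # Reject unknown tokens, and tokens presented from a different IP.
    return sessions.get(token) == client_ip

login("abc123", "203.0.113.7")
assert validate("abc123", "203.0.113.7")       # same IP: accepted
assert not validate("abc123", "198.51.100.9")  # IP changed (CGNAT, mobile handoff): rejected
```

The false-positive rate of that last branch is why the restriction had to be dropped: to the server, a carrier-grade NAT rotation is indistinguishable from a stolen token.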
You have misunderstood the threat. When an attacker gains a foothold in a corporate environment, they will immediately try to find any accessible credentials to assist in lateral movement.
If the user's passwords to the rest of the corporate systems are sitting unprotected in a browser password store, it is a gold mine.
Yes, they should have 2fa and single sign on and so on, but many places don't. The article isn't terrible, it's just pointing out something in browsers that works ok for home users but puts businesses at some risk.
> When an attacker gains a foothold in a corporate environment, they will immediately try to find any accessible credentials to assist in lateral movement.
/me not a security expert. But isn't this the mistake I used to make for years: believing that the hacker is a human, responding to his environment and making decisions? It took me a long time to acknowledge that nearly all network attacks are automated, and unless it's a highly targeted attack, the attack script won't care whether you're a corporation or a couch potato in a basement.
The big news headlines like WannaCry were fully automated, but one-trick ponies. If you had patched, you were fine. What made it a problem was that so many hadn't.
But the sinister targeted ones where you only find out because someone is selling terabytes of confidential data, those are usually highly targeted and manual. It's very hard to automate and stay under the radar.
> But the sinister targeted ones where you only find out because someone is selling terabytes of confidential data, those are usually highly targeted and manual.
It doesn't surprise me that "sinister targeted" attacks are also "usually highly targeted".
One thing I think isn't widely appreciated is that insecurity is a highly developed market.
People still have this idea of the lone hooded hacker doing everything from their bedroom.
In reality, people specialise in different aspects of cracking security and sell what they have to someone else. So someone is in the market for a zero day, or a compromised system in the government or a company, and they can just buy that.
For home users, the payoff isn't big enough to be worth more than automated attacks. So you mostly escape the human in the loop.
It can either be a human directing it once a foothold is established, or an automated attack. Initial compromise may be automated but lateral movement is harder to automate.
If you work on highly sensitive systems then you should expect a human in the loop at some point.
I don't; I'm retired. I have only my home network to fret about. I don't have data to lose, but I don't want some rotter using my network to attack other networks. That rotter isn't going to set up automation to grab my family photos; but he'll use automation to attack other networks.
I've never worked with "highly sensitive systems", as far as I'm aware. I've only ever worked with systems that had the potential to wreck the company. I don't know if that counts, in your book.
Potential to wreck the company counts pretty high in my book!
My own home security is merely adequate. I turn off things like UPnP on the router. Disks and backups are encrypted. I don't worry overly much about it. If someone actually targets me, it's probably game over, but it's OK against random script kiddies or someone stealing my computers.
>When an attacker gains a foothold in a corporate environment, they will immediately try to find any accessible credentials to assist in lateral movement.
So you think this isn't the case with home users? Maybe I still misunderstand the point that is being made here, but from my perspective it's only a matter of time until my encrypted password store gets exposed to the local attacker (as soon as I unlock it).
I didn't say it wasn't a problem for home users. I said that the browser security model works OK for home users who aren't at all bothered by security unless it gets in their way, in which case they will switch to a product that doesn't. It's poor security but probably the best we can do by default.
So that default browser behaviour creates a risk that a business should acknowledge and assess.
A home user can of course also decide it's too risky, or that password managers are too risky and only a yubikey will do.
Well, better than nothing. Most places I've worked do not have ubiquitous 2fa. And it's mostly just a gate to gain access, rather than something required to maintain access.
A hijacked session is bad, but nowadays not nearly as bad as a leaked password:
- Sessions can be linked to a user's location and/or browser fingerprint
- Sessions are short(er) lived
- Sessions can easily get invalidated (e.g. device wide logout)
- Almost all critical actions are behind additional security (e.g. can't change password without 2FA or change billing information without confirming password and/or 2FA in order to apply changes, etc.)
- Sessions are not shared across properties, whereas many users share their password across multiple internet sites/properties
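The mitigations listed above can be sketched in a few lines (a hypothetical session record, not any real framework's API): the session carries an expiry, a fingerprint hash, and a revocation flag, so a stolen cookie alone is not enough and ages out quickly.

```python
import hashlib
import time

# Hypothetical session record illustrating the mitigations above.
def make_session(user: str, fingerprint: str, ttl: int = 3600) -> dict:
    return {
        "user": user,
        "expires": time.time() + ttl,  # short-lived
        "fp": hashlib.sha256(fingerprint.encode()).hexdigest(),  # bound to client
        "revoked": False,  # supports device-wide logout
    }

def session_valid(s: dict, fingerprint: str) -> bool:
    return (not s["revoked"]
            and time.time() < s["expires"]
            and s["fp"] == hashlib.sha256(fingerprint.encode()).hexdigest())

s = make_session("alice", "Firefox/121 on Linux, 1920x1080")
assert session_valid(s, "Firefox/121 on Linux, 1920x1080")
assert not session_valid(s, "curl/8.5")  # stolen cookie, different client
s["revoked"] = True                      # "log out everywhere"
assert not session_valid(s, "Firefox/121 on Linux, 1920x1080")
```

A leaked password, by contrast, has none of these properties: it doesn't expire, can't be revoked per-device, and may be reused across sites.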
As usual with security discussions, one needs to start by analyzing security threats and attack vectors. Is a simple-to-memorize and likely multi-use password a bigger security threat than unique, hard-to-guess passwords in file storage? It depends.
Is this a laptop without disk encryption that travels a lot and especially internationally? Sure, these semi-unencrypted passwords on disk are likely not very safe from lost laptops, customs inspections, etc. Might still be better than a common and simple password though.
Is this a laptop sitting at home most of the time with a strong disk encryption? I’ll take unsecured browser password storage with unique hard passwords any day.
Even Windows Home SKUs have been trying, in 2022, to gently move casual home users (who today are most likely using laptops) to FDE. It's one of the reasons for the Microsoft Account requirement that is much maligned by a sector of HN commenters. A trade-off to using Microsoft Accounts for login is that Microsoft knows you have a recovery path for FDE keys and can enable FDE on your behalf, which in isolation is a good idea for casual home users. (Whatever you think about the other implications of needing a Microsoft Account for a personal casual-use device.)
This is a bad, bad article: the advice is dated, and the counter-arguments are well known and oft-discussed by anyone who's actually in the security community.
The author/website does seem to be offering services in the security industry, but they seem compliance-focused rather than security-focused (compliance is a component of security). So they're likely offering legal & administrative expertise rather than technical.
On reflection, I think this article is why I find it extremely difficult to hire qualified security experts.
The vast majority of "security experts" I find tend to be "box tickers" who can impose lots of rules like this ("don't use the built-in password manager"), but whose advice is worse than useless because they don't understand the actual threat models.
I know great security people exist, but in my experience I wasn't able to find them, and instead just decided to learn the most important things myself (I'm an application engineer who felt I had a good grasp of things around application security, but less so around infrastructure and corporate IT security).
I feel that it's also very hard for people who understand security to put themselves out there. For people who actually care about security and not just compliance, there's no real job market at all. Companies never hire for this kind of role.
So I think they're hard to find because there's no real market for them, there's little point in announcing themselves and their expertise since there's really [almost] nobody interested in hiring that.
> For people who actually care about security and not just compliance, there's no real job market at all. Companies never hire for this kind of role.
Couldn't agree more! This is the sad truth, as you say.
This is the space I work in, and finding a team that actually cares about building a secure product against reasoned threat models is as rare as winning the lottery.
Roles with great pay are easy to find in the policy and process side of security though.
Their site seems focused on CISOs which in my experience lack serious technical skills.
In fairness this is part of their job, but it is sometimes annoying when they refuse to recognise real risks and just focus on stuff that's just for show. Like BitSight scores. In our case they don't reflect our actual environment at all (they don't recognise 99% of our traffic), but the CISOs love BitSight because our score is an A.
It would also be great if someone could get the author a Twitter account. I usually hate twitter but this seems like a 2000+ word article that could have been stated in a couple of tweets.
> Note – many of these dedicated password managers have browser plugins or extensions to help users save and fill passwords. These are very different and much more secure than the built-in password managers that are the subject of this article!
This is a shitty article from someone who doesn't really know what he's talking about. Here is a post from Tavis Ormandy, well-known security expert at Google Project Zero, advocating the exact opposite: https://lock.cmpxchg8b.com/passmgrs.html
Although I trust Tavis Ormandy more than this random blog post, I disagree with the idea that the password managers built into your browser are somehow superior.
I use Bitwarden and there's simply no comparison between what Firefox/Chrome offer me and what Bitwarden offers. You can't even add an extra field to the browser password manager, and Google helpfully "encrypts" your data with the password they're already receiving when you're setting up their browser. You can change that, of course, but like a router's default password, if you don't have to change it, people won't. There are also other UI/UX problems (had a site move TLDs? good luck fixing that in your browser!) that browsers don't seem to care about for the sake of "simplicity".
The dangers that come with external password managers are because of a lack of good password manager APIs in browsers, forcing them to break the security model. It's easy to say "don't use them, they break some design", especially if you work for a company that designs their own browser, but there are quite tangible benefits to taking the risks, the most important of which is probably "not handing over your data to some browser giant".
Re: this article: if the attacker has access to the browser password database, they have access to cookies and the ability to monitor key strokes. You can encrypt passwords all you want but the attacker can still move laterally between services by just copying your session cookie and hitting the next vulnerable target. Credentials will come next time the user logs in. Thinking about secure passwords is important, but there's a bigger picture that needs to be accounted for.
The writer of this article mostly seems interested in ticking boxes based on how much they hammer on writing policy and enacting policy and talking to people about policy. I'm not sure who the target audience for this blog is, I would guess managers who are looking to improve their company's security?
FWIW, I agree with everything you've written. My point is that it's very clear to see that Tavis Ormandy understands at a very deep level the tradeoffs and threat vectors of browser-implemented password managers vs. password-manager plugins. You may not come to the same conclusion he does, but Ormandy's position is not due to a lack of understanding.
That is in contrast with this blog post author who fundamentally doesn't understand what he's talking about.
> I disagree with the idea that the password managers built into your browser are somehow superior.
I use Password Safe. It doesn't integrate with my browser, and the database is stored locally. So I'm exposed to no threat from the subscription company; and there's no content script. It's easy to back-up the database. I can store the database and the Password Safe program on a memory stick. And I can use the password manager with any browser.
What do I lose? I lose auto-filling of login forms. That's it. Instead I have to copy-paste a password, something that happens 4-5 times a day.
I'm planning to switch to pass, for a few reasons (notably that Password Safe is Windows-only), but I'm dragging my feet.
Pass is amazing. Along with YubiKeys to do the encryption and "touch to sign" to avoid agent exploitation (so you need to touch it for every use), it's pretty safe too. There are good browser plugins available.
It's a bit of a pain to set it all up on every OS. Especially on Windows it feels far from native. But it works well.
I don't like copy/pasting passwords. Many computers have a clipboard log these days and you never know what program is polling for your clipboard contents. Mobile apps used to do this all the time and I don't believe for a second that shitty trackers and other such online stalkerware doesn't try to get the contents of your clipboard.
I use an integrated browser mostly so I don't need to go through my clipboard, even though Bitwarden has a setting to auto clear the clipboard after copying a password.
A nice feature that some local password managers like KeePass have is auto-type, so the actual password doesn't end up in the clipboard.
Of course, personally I'm not a fan of the idea of the command being interpreted incorrectly or messing up window switching so that my password would be sent to the wrong application.
That's a bad article too. It talks about one bad "feature" of a specific password manager extension and then dismisses every other password manager extension without verifying whether they have this bad "feature" too. And this is from a security expert? Bruh.
> It talks about one bad "feature" of a specific password manager extension and then dismisses every other password manager extension without verifying whether they have this bad "feature" too.
He specifically says:
> All online password managers work in a similar way, this isn’t specific to any one implementation.
And they do - because that's the integration point they have to use. It's just how the architecture of browsers works.
> And this is from a security expert? Bruh.
Not sure what this is supposed to mean, but I'll take it as dismissal. Assuming that is the case, you should rethink how you assess information, because Ormandy knows what he is talking about.
Whoever wrote this seems to have no modern security training.
> then your average employee is probably doing one of these three things: Writing passwords down on paper
> Hopefully, you have a corporate security awareness training program and have long been discouraging
Please, please encourage people to write down passwords on paper! That provides really good safety against most modern threat models, especially in a world where people are working from home.
> Even though Chrome, Firefox, and Edge browsers all store passwords in encrypted databases, by default all three products intentionally leave the associated encryption keys completely unprotected in predictable locations.
There was (is?) a long-lived Chrome issue (which I can't find now). They reasonably make the point that operating-system-level protection is the correct way to protect this (i.e. if a person can log onto your device, they are assumed to be you).
Even if the user doesn't turn on a master password, having the key in a predictable place on an encrypted volume with appropriate access permissions is still far more secure than sticky notes on the monitor. Contrary to the OP link's statement, it isn't enough for the attacker to get access to the user's system, they have to get access to the user's account.
And if the organization in question isn't using Bitlocker or FileVault or some other encryption, browser password stores are way down the list of security worries.
I'm sorry, but this view is fundamentally incorrect.
You have to consider what the actual threat model is. The reality is that your primary threat model is not going to be one employee compromising another's, nor is it someone malicious physically tracking down your employees and stealing and/or physically compromising their devices.
Those threats do technically exist, but your employees have to be extraordinarily valuable to warrant the surveillance required to use physical access to compromise them/their account. It is vastly more likely that the attack vector is going to be spam/scam/phishing leading to malicious software installed on the victim's machine. Notably in regard to the post-it/notebook you claimed was the worst option, computers generally can't read post it notes attached to the monitor.
Similarly bit locker and FileVault are obviously important, but do nothing at all to stop a remote attacker, because the only threat model that they protect against is physical access to the hardware. If a remote attacker has got some malicious software installed, then by definition once it's running all the drive content is available to the attacker's software (technically SIP should limit this on macOS, but I'll assume that an attacker also compromises SIP).
But it is extremely important to understand that in none of these cases did the attacker need to get "access to the user's account" - that was simply a byproduct of their primary attack.
Given the comment I was replying to that was considering "access to the user's account" being the unlikely case, it seemed reasonable to go into very clear detail on what the real world attack vector is.
That said, phishing does not necessarily mean gaining access to an account. The typical path is the low cost "trick someone into providing their credentials to a malicious party" - something browser based password managers do well, as they are very careful in ensuring they only ever enter passwords into the correct domain. You can also phish someone who is very careful by not asking for their account credentials, but having them do something innocuous that compromises their machine. For example you can have a fake company interview them and provide them a PDF with the job offer, and then have the PDF install malware.
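The origin-matching behaviour described above is the core of why autofill resists credential phishing, and it fits in a few lines (hypothetical vault data; real managers match on registrable domain with more nuance):

```python
from urllib.parse import urlparse

# Hypothetical stored credentials, keyed by the exact host they were saved on.
vault = {"accounts.example.com": ("alice", "correct-battery-staple-9")}

def autofill(url: str):
    # Fill only when the host matches a stored entry exactly; a lookalike
    # phishing domain simply never matches, so nothing is offered.
    host = urlparse(url).hostname
    return vault.get(host)

assert autofill("https://accounts.example.com/login") is not None
assert autofill("https://accounts.examp1e.com/login") is None  # lookalike: no fill
```

A human squinting at an address bar can be fooled by `examp1e.com`; a string comparison cannot, which is why the "manager refuses to offer the password" signal is itself a phishing warning.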
But the important thing is that the primary attack vector for any device is remote access. So things like disk encryption don't matter, and things like password notebooks are not accessible.
The real problem is that once you have malware running on your machine, the presumption is that malware interested in credentials is going to elevate itself to super user at least and so generally gain code injection abilities in most processes that it could be interested in (e.g. all the password managers, or the system pasteboard, or whatever mechanism whichever the target password manager uses). Modern protections such as aggressive code-signing enforcement, sandboxing, SIP (or the windows equivalent), etc make such compromises harder, but harder != impossible.
If your home directory is readable or writable by anyone other than you, you're compromised in a dozen more important ways, even though this is of very high importance. Your home directory's security is an axiom. No, it's not literally guaranteed, in the same way that once upon a time there were remotely exploitable worms in major HTTP frameworks, yet we don't question the assumption that a web server doesn't allow remote code execution; it's the wrong layer of abstraction.
Even encrypting the browser's data store at the application level is misguided and probably pointless: anything that can read the browser's files is going to read the encryption key just as well. Anything that can read the browser's files when running as you can probably pop up an identical-looking "Enter Password" prompt, and will have the right timing and permissions to enter the password into the browser once it's been leaked. GUI frameworks are not designed to protect the user from malicious applications.
Android actually handles this much better: applications (rather, developers) are given their own user ID, and so separation of files between apps is enforced at the OS level. Some of this is why everyone has moved to Docker on the server, too.
This is a bunch of silly hand-wringing. I guarantee that if browsers required creating and memorizing and typing a master password all the time, users would be less secure overall. Because people simply wouldn't use the annoying password manager. Using a password manager without a master password is way more secure than not using a password manager at all.
If you are a business and you want your employees to be secure, forget about anything to do with passwords. You need hardware second factor tokens. Which I notice the article doesn't mention at all. An article about login security in 2022 that doesn't even mention hardware tokens for two factor authentication is not worth anyone's time.
The recent targeted phishing attacks against Twilio and Cloudflare made this abundantly clear. Twilio didn't use hardware tokens and was hacked. Cloudflare reported they had 3 users who entered their username and password into the phishing site, but since all of their employees use hardware keys, they weren't hacked.
Be careful what you wish for. It's happening but not in the way you're envisioning. A lot of logins now require a phone app. That's the hardware offloading, and reduces overhead of having to manage dedicated hardware. Instead, users manage it themselves and the business piggybacks off it.
> Using a password manager without a master password is way more secure than not using a password manager at all.
I disagree: diceware AND hardware security keys are stronger, with no password manager needed at all.
However I agree the article is overblown. For most people, browser based password managers are probably a vast improvement, since most will likely never educate themselves on such things or accept the solutions as personally viable. The browser is right there and even prompts them into using it.
Because they are significantly more memorable than passwords made from individually random characters, without significantly reducing their strength.
I have a handful of diceware passwords for important accounts I need to remember, and find that pretty easy... the rest are throwaway-level unimportant, or I leave them to email-based login.
However, I dictate password and security policies where I work, and don't have to adhere to stupid "change your pw every week" rules, so I realise this is not a solution for everyone working under different (stupid) conditions.
I have about 200-ish passwords in my password manager. No matter how memorable any one of them is individually, remembering 200 secure secrets is not practical. Never mind that many of them are used extremely rarely; good luck remembering a diceware password that you use once every couple of years.
That's fair, but I'd ask yourself: of those 200, how many are important enough to warrant a strong password? And of those, how many are important and frequently used enough to need to be independent of email-based authentication? Perhaps you don't need to remember as many as you think.
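For what it's worth, the strength claim for diceware is easy to check. With the standard 7776-word list, each word contributes log2(7776) ≈ 12.9 bits, so a six-word passphrase is about 77.5 bits; the sketch below uses a tiny stand-in word list (the real list has 7776 entries):

```python
import math
import secrets

# Stand-in word list for illustration; a real diceware list has 7776 words.
WORDLIST = ["correct", "horse", "battery", "staple", "zebra", "quartz"]

def diceware(n_words: int, wordlist=WORDLIST) -> str:
    # secrets.choice gives cryptographically secure uniform selection.
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

bits_per_word = math.log2(7776)
assert round(bits_per_word * 6, 1) == 77.5  # six words from the real list

print(diceware(6))  # memorable, but only as strong as the list you draw from
```

That ~77 bits is plenty for a handful of memorized master secrets; the practical limit, as the thread notes, is how many such phrases a human can retain, not their individual strength.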
Safari already gates password autofill on the secure element (or enclave, I can never remember the difference) when such is available, which is what you're suggesting :D
That said, you're absolutely correct: making autofill harder to use would mean people wouldn't use it and would revert to predictable, reused passwords. And if you're a business, you should be using token-based authentication, be it dongles or phone and PC secure elements.
> you should be using token-based authentication, be it dongles or phone and PC secure elements.
Phone? Really? I regard my smartphone as the least secure piece of computing equipment in my possession. I certainly don't trust it to secure anything. A phone isn't a token; it happens to be "something you own", so some sites treat a phone as if it were a security token. This is a problem I encounter mainly on UK government sites (sites that I more or less have to use). These sites don't offer support for alternative hardware "tokens", nor for people who don't own a smartphone.
I had assumed that at this point all android phones have got some equivalent to the secure element present in all apple products produced in the last 5+ years, if they don't that's bananas.
Assuming that, a phone is likely one of the most secure devices that you own.
Having a "secure element", enclave or whatever, doesn't mean that some rando TPA app is using it. I'm not qualified to inspect the source code of Android apps. As far as I'm concerned, these enclaves aren't for my protection; they're there to protect the interests of the device's owners (who aren't me).
A smartphone OS runs under the supervision of another OS, which is proprietary - source-code not supplied. A lot of the hardware is also proprietary, each component having its own (opaque) firmware blob. The device has telephony and internet connectivity. The connectivity providers behave as if they own the device, install software without warning, and surreptitiously inject content.
If I'm not competent to analyse the device myself (and I'm not), then I'm dependent on third-parties to confirm the device is secure. How many smartphone reviews have you read recently that focused on security, rather than how shiny the device is, how good the cameras are, or how good the screen is? I don't recall ever seeing a review that paid any attention to security.
These are opaque devices, with a huge attack surface. It makes no sense to me to describe it as "one of the most secure devices that you own".
I agree that a TPA not using the secure element on the phone is an issue, but I wasn't addressing low-quality authenticator apps; for apps or for tokens, you're assuming that the developers haven't cheaped out.
> A smartphone OS ...
Yes, and the same applies to tokens albeit with simpler logic, and the same applies to any PC. So at some point you're saying "I trust that the company producing the product I'm using is not lying in their security documentation".
> .. Reviews ...
Correct, because reviewers are talking about feature set; competent phone companies provide extensive documentation of the security architecture of their devices. Security researchers periodically write up their investigations.
> These are opaque devices, with a huge attack surface.
No, by the standards you have presented all your devices are opaque, but unlike every other device you own the default security model of a well designed phone is far stronger than any other device you have. They don't run arbitrary code, they don't support loading code into kernel space, user partitions are completely separated from the read only system partitions, etc.
I'd take a maximum transparency keyfob-type device. By "maximum transparency", I mean I'd have to be able to assemble it myself, from COTS components, and understand any software it runs. But I have no idea how to make such a thing (I've tinkered a bit with microcontrollers and TTL).
The closest I'm aware of is the fob my bank gave me; it has a number pad and a display. You read a challenge off the screen and type it into the keypad; it displays a numeric response, and you type that into the website form.
That seems cool, if I could build the device myself, or even know what it does.
I don't think there is anything wrong with storing passwords unencrypted locally assuming the machine itself has encrypted storage. Malware that retrieves passwords from password manager could get them from an unlocked password manager as well.
Reading a file and code execution (which is required for reading another process's memory) are two different levels of vulnerability.
Essentially, if you have a piece of software with a bug that allows someone to remotely read files from your system, your browser-stored passwords are compromised, while your passwords stored encrypted are safe-ish, depending on how good your passphrase is.
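The "safe-ish depending on your passphrase" part comes from the key-derivation step: an encrypted store derives its key from the passphrase with a slow KDF, so an attacker who steals only the file pays that cost for every guess. A minimal sketch using Python's standard-library PBKDF2 (the function names here are hypothetical, but `hashlib.pbkdf2_hmac` is real):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256: deliberately slow, so offline guessing is expensive.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)           # stored alongside the database, not secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32           # 256-bit key for the database cipher

# Every wrong guess costs the attacker the same 600k iterations:
assert derive_key("wrong guess", salt) != key
```

Contrast this with a browser store whose key sits unencrypted next to the database: there the file-read vulnerability alone yields the passwords, with no per-guess cost at all.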
Of course if you have malware that can run arbitrary code on your system you are hosed either way.
Encrypting your mass storage devices doesn't protect against malicious code as by definition the filesystem is unlocked once you're logged in. Mass storage encryption is primarily protection against physical access.
A lot of the criticism of this article seems to be: “If they already have access to your local file system, you already have bigger problems”
What about defence in depth?
This article is suggesting an alternative: dedicated password managers such as 1Password. These password managers do not suffer from the same weak key storage as the browsers' built-in password managers.
So this article is bringing attention to a weakness in the browser’s built-in password managers, and suggesting a very viable and easy-to-adopt solution.
The problem is that the article takes on this hyperbole-laden all-or-nothing tone that does nobody any favours.
If you go along with the all-or-nothing mentality, then local file system access is pretty much game over anyway.
If you want to take the security-in-depth approach, you have to first apply it to these password managers and take an honest look at the problems they solve. And it turns out they’re amazing from a cost:benefit perspective.
Put differently: go read the Spectre/Meltdown papers. Imagine if _those_ were written in the same tone this is. That’s the problem.
The article states that there is a big problem, but the solution it gives only incrementally improves the situation.
If passwords stored in plaintext are a problem, don’t just use slightly harder-to-access storage. Use SSO so there are no credentials to steal.
If the article gave a complete picture of what to do to mitigate the damage of endpoint compromise or was less alarmist in its assessment of risk, I would have liked it better.
That's a fair assessment of the article I think. It's a genuine threat, and one we have had to deal with in a small way before.
Most of the criticism I've read here seems to be dismissing the threat entirely, using the weak "if that happened you've got bigger problems" argument.
> So this article is bringing attention to a weakness in the browser’s built-in password managers, and suggesting a very viable and easy-to-adopt solution.
Because many actual experts disagree that it is a weakness.
> Why the strong criticism of this article?
The advice tries to make it out like browser suppliers are doing this to lower security for some unknown reason, whereas actually their model is safer than what is recommended.
It is possible to argue against browser suppliers here, but you need to look at their arguments for doing it that way. This article doesn't do that.
These articles always assume everyone has the same threat model: you are head of the NSA's IT department, and hostile nation-states are spending billions to attack your security with everything from 1024-qubit computers to $5 pipe wrenches.
Forcing users to employ the same security tools and practices that would be appropriate for dealing with far more serious threats is just annoying, and likely to result in passive-aggressive resistance.
I remember there being a lot of buzz about security issues related to letting browsers store passwords more than 10 years ago, and ever since then I have just not trusted them. I reluctantly use an open source password manager, KeePass, and figure it is still better than using the same password everywhere.
Why would we store passwords in the browser? Seriously. I want my passwords to be available wherever I need to use them, and that only happens if I somehow share the passwords between my devices. I would not trust a proprietary browser developer to store my passwords securely. Period. I have no way of seeing or knowing what is going on on their cloud servers.
There are very simple ways to sync files between systems, which are open source and much less likely to compromise your passwords. E.g. the database itself is encrypted, and the methods of sharing are so simple that it is easy to cover many of the most probable points of entry. Obviously, sharing a password database file over the internet is extremely bad, but if you feel you must, at least manage the server where you keep the password database yourself. Heck, I would even 7zip it with another layer of encryption, because I cannot know for sure that KeePass' encryption is safe.
There is a lot of backlash against this article, which to be fair is kind of poorly written, but it still makes a valid point. Encrypting something and writing the key in the same place is pointless.
Browsers encrypting the passwords with a key saved on the same device is security theater and has to be called out.
Security is not all or nothing; most people don't have full disk encryption, so their passwords are sitting there completely unencrypted, trivially retrieved from anyone with physical access.
> makes a valid point. Encrypting something and writing the key in the same place is pointless.
The backlash is due to this not being a valid point. Security best practice has long moved on from dogmatic binaries and treating humans like robots.
Encrypting something in transit and writing the key in plaintext locally, while not ideal, is far from pointless. Building perfectly secure systems that nobody will use is what's pointless.
> trivially retrieved from anyone with physical access
Pretty bold using the word "trivial" here: physical access is not the primary threat model for average password manager use.
It's not in transit; it's the equivalent of storing your passwords in plaintext, which I don't think can be actually defended as a good practice.
Regardless, a much more worrisome question arises: what happens when you turn on synchronization on these browsers? Do they encrypt the passwords with a key only you know? Or do they just ship the "encrypted" passwords along with the key, so that your passwords are now essentially plaintext on their servers?
> which I don't think can be actually defended as a good practice
Allow me to try defending it:
Even storing your passwords in plaintext locally using an integrated password manager would:
1. discourage password reuse
2. usually encourage generated strong passwords/phrases (best password is one you don't know)
3. allow easy automated password auditing
4. prevent phishing-based credential capture via domain matching
Point (1) alone addresses a much greater threat than the likelihood of physical access (or rootkit/malware, in which case keylogging would defeat keyed encryption anyway).
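As a sketch of point (2), generating strong secrets needs only a CSPRNG. This illustrative snippet (the function names are my own, not any particular manager's API) uses Python's `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # CSPRNG-backed generation; the best password is one you never memorise.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def generate_passphrase(words: list[str], count: int = 4) -> str:
    # Diceware-style alternative for passwords you occasionally have to type.
    return "-".join(secrets.choice(words) for _ in range(count))

print(len(generate_password()))  # 20
```

Even a manager that stored these in plaintext locally would still be delivering points (1) through (4) above.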
> what happens when you turn on synchronization on these browsers? [...] Or do they just ship the "encrypted" passwords along with the key, so that your passwords are now essentially plaintext on their servers?
The typical approach is to ship the encrypted password without the key: provided it's a key known by the user, this makes it easy to download the encrypted credentials on any device and decrypt locally. If it's a generated key unknown to the user, there's a few different strategies to adding a second client - more complex but doable.
Sure, encrypting the passwords is better, but to steal all the stored unencrypted passwords the attacker would need access to the user's computer.
If they have that access, you have bigger problems on your hands. Yes, this can lead to privilege escalation, but for the vast majority of people, access to the desktop is enough for that anyway.
If you need better security, you probably already are using more advanced measures.
One thing I often do when I forget one of my passwords is go into Chrome, go to the webpage corresponding to the login in question, and let it autofill; then I turn the password field into a plain text field in the HTML editor.
I've always thought this kind of bypasses most checks you get if you try to go into the password DB in the browser itself.
The new Hell I'm experiencing is everyone wanting to validate my identity through my phone. Email does it, banking does it, I suspect by the end of the year Windows will probably be sending me a code before I can log in. I'm sick of it. I don't like needing to have my phone on me, I don't like the fear that if I lose my phone I'll be locked out of everything, and I really don't like being forced into this.
It feels like there's a lot of fear around passwords right now. I'm sure companies see them as a liability and are eager to move away from them as soon as possible. Are we going to have a future where each person (or identity) has a single hardware token for all logins? I don't think we're anywhere close to that yet.
SMS TOTP is indeed bad and your suspicion is well warranted.
It's a lazy way to implement 2-factor authentication and exposes the user to MITM attacks as well as a host of other nastiness.
U2F (stuff like what Google Authenticator does) is way better and less phone-dependent. The only reason a team would opt for TOTP if they had the resources to implement U2F is because it's a good way to get your phone number.
Edit: embarrassingly I've made an error in my use of acronyms. What I refer to as TOTP is in fact plaintext OTP sent via SMS, and what I refer to as U2F is actually app-based TOTP. Apologies!
You seem to be mixing and matching acronyms that don't really make sense.
"TOTP" means "Time-based one time password". It's where the server and client (like the Google Authenticator app) share a secret and then the one-time password changes every 30 seconds. "SMS TOTP" doesn't make sense because SMS codes can be anything random that is sent out to the user attempting to log in.
As I stated above, the Google Authenticator app is a TOTP implementation and is also susceptible to MITM attacks.
U2F refers to physical hardware keys (like yubikeys) that cryptographically validate the requesting server is who they say they are, and is not susceptible to phishing.
You are completely right, I misspoke (mistyped?) in my previous comment. Apologies for the error.
What I referred to as U2F is TOTP as you say, and what I referred to as TOTP is plaintext OTPs. That's an embarrassing mistake.
What I had meant to highlight is that SMS based authentication is more vulnerable than an authenticator app because the SMS exposes data that can be intercepted or collected by a third party. A timed algorithm solution does not expose this risk at read time (only when synchronising - unless I've missed something?), which is why I strongly prefer it.
This is a modest but often overblown risk. SIM swapping is easy but scales badly. SMS interception malware exists but isn’t widely installed and you rely on the malware not being able to root your phone or get accessibility access to your screen.
But most importantly, both TOTP and SMS are vulnerable to phishing - by far the most common method of stealing second factors.
I disagree here, suspicion is not warranted - I'd prefer that they support additional non-sms 2FA paths, but supporting SMS based 2FA has many benefits for a company beyond being "lazy"
SMS is vastly better than nothing, and it has the benefit of not requiring users install random software that they (a) don't know how to install, (b) don't know how to use, (c) don't have a recent enough device to use, (d) you don't need to worry about them deleting the app and then losing their 2f, (e) doesn't require your customers having an account with (and potentially paying) another company just to log in to your site, etc
I really do think that tech people underestimate the proportion of people for whom at least one of the above applies. Not everyone runs a one- or two-year-old phone; many of the very cheap phones run very old versions of Android, can't be updated, etc.
Obviously SMS 2FA has a different problem, as the sufficiently poor may not have a constant phone number. SIM hijacks can happen, but the targeting requires much more effort than simply moving on to the next non-2FA account.
[edit: updated to make sentence structure not look like a series of wordle guesses]
Huh, you're correct, those apps have much longer support than I expected - but I did not check :( - for free apps (though I guess Google Authenticator is backed by a small immigrant business :D ). Thanks for pointing out my error; I'd fix my comment but can no longer edit things.
What I was trying to say is that there are a _lot_ of very old smartphones in poor communities, and they're still in use. I searched Craigslist in very poor parts of the US and found quite a few phones running Android <= 4, with fingerprints implying they were in use. I would guess that that end of the spectrum has many more offline-only transactions. It's also a community who may not have internet access, and may not have library access, which is how people in larger cities can access the internet (from volunteering I discovered sometimes to watch porn :-/).
Not intended as a correction to your comment, but just as backup for my statement that plenty of phones still can't use TOTP apps: the cheapest phones with T-Mobile are not smartphones, and the cheapest ones on Amazon are not running a recent enough Android, and some even run Windows OS?!?!?!
I recently had to tell my bank, with my voice over the phone, that the make and model of my first car were the three random words 1Password generated for me. “Yes, the make and model of my first car was… a Venerated Breakfast Platoon.”
Little-to-nothing to do with liability. Your identity = $. Phone-validated users are simply higher ARPU and (somewhat less importantly) lower risk - an artefact of gating access behind harder-to-spoof touch points.
To the extent that a hardware token proffers anon/pseudonymous verification, there'll be pushback from industry. Because again, your identity = $. Expect anything that would plausibly get a revenue bump from verifying identity to force you to do so eventually. Because you know, security.
I use GV, but it's really tough to use with many sites. Many sites will detect the non-cellphone (virtual) nature of the number and require a different (cellphone backed) number.
Yesterday, I put my GV number in the DALL·E sign up and it wouldn't let me proceed. So I abandoned the sign up flow.
Yes, or go with their hosting, and the password manager design allows you to encrypt your data so even they can't get access to it.
Obviously this is quite the honeypot, so people will be trying to attack it in the general case. Browser bugs, JS bugs in extensions and such like are a risk.
If you want to be more paranoid (not a bad thing) you might need to do away with a password manager in the browser, and use an independent program like KeePass. More paranoid still, and you would run that program on its own separate physical device.
There is a balance between security issues trusting someone else, and security issues rolling your own and screwing it up though.
For example, is KeePass with a password only less or more secure than LastPass with an encryption key and password?
It shouldn't be a case of "and the design means that they can't access it"; that should be the only behavior.
I agree with your other points
Assuming the LastPass encryption key is a separate token I would say LastPass wins, but that is solely assuming similar architecture, obviously you can make design decisions that mess up everything.
Or you could report this as a security bug on those browsers.
This vulnerability does not exist in Safari on any platform: macOS, iOS, or Windows. Admittedly, in the last case, because it is alas dead :D
I would have assumed that on macOS Firefox and Chrome use the platform APIs that support secure storage of data, and would absolutely consider this to be a security bug if not.
A lot of critical reactions, maybe deserved. Some say the master password is not all that important. I notice that with Firefox Sync, knowing the unlock swipe of your Android is all that's needed to view passwords in plain text via the Settings UI, when you lose your phone or someone peeks at it in an unguarded moment.
I'm just not sure why they haven't built in something like Apple's Face ID. I know security experts hate anything that isn't locked down with 3 forms of verification, but my goal is to have security to the point that it isn't a hassle every time I want to go to a new website.
Really? I would have assumed that they do what Keychain on macOS does, which is password-gate the decryption key, so it's good to know they're just sitting on disk somewhere. It's a shame, as half-assed stuff like this is why people don't trust these tools :-/
Read above. Skeptical, misunderstood the threat/risk surface, lack of coherent adoption of modern NIST expectations, focussed on the one story they know, which is the password manager keys are at risk if the machine is compromised.
LastPass just sent us a bill for $750; unless we pay it, they have locked the whole company out of our shared password database and refuse to supply even chat or email support to discuss it.
LastPass was acquired by some shady Chinese company. It's time to migrate to a reliable open source manager like Bitwarden (which also provides cloud sync).
No it wasn't. LastPass is (soon to be was) owned by LogMeIn - now GoTo - which was originally a Hungarian company and is now owned by an American investment firm. LastPass is being spun off into a standalone company again.