> is there actually a technical reason to believe they are better than LastPass or any of their competition (have they, like, open sourced all their stuff?).
I choose to use their clients unmodified, along with an instance of the server formerly known as "bitwarden_rs" running in my basement as the sync backend.
https://github.com/dani-garcia/vaultwarden
I still pay them annually for their "freemium" features even though I prefer not to let them host my data.
Do you expose your server to the internet, or is it OK to sync devices only when you’re at home? Is every device a replica? If you lose your server, can you redeploy it from the data on your devices?
Not the parent, but I have been hosting a Vaultwarden instance on the public internet for about two years now.
After learning about certificate transparency logs, I moved the app off a raw subdomain and behind a secret URL path. Think “hello.domain.com/correcthorsebatterystaple”.
Is it security by obscurity? You bet. Does it work? Yes. I regularly evaluate the JSON logs emitted by Caddy in a pandas script and so far, no foreign party has even hit that endpoint.
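For the curious, a minimal sketch of what that kind of check could look like, assuming Caddy’s default JSON access log (one JSON object per line with a nested request.uri field); the log path and secret prefix below are just placeholders:

    # Sketch: count how many requests ever touched the secret path prefix.
    # Assumes Caddy's JSON access log (one JSON object per line); the file
    # location and prefix below are placeholders.
    import json
    import pandas as pd

    LOG_FILE = "/var/log/caddy/access.log"        # placeholder path
    SECRET_PREFIX = "/correcthorsebatterystaple"  # placeholder secret

    with open(LOG_FILE) as f:
        records = [json.loads(line) for line in f if line.strip()]

    df = pd.json_normalize(records)  # flattens nested keys like "request.uri"
    hit = df["request.uri"].str.startswith(SECRET_PREFIX)

    print(f"{hit.sum()} of {len(df)} requests hit the secret prefix")
    print(df.loc[~hit, "request.uri"].value_counts().head(20))  # what scanners probe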
It’s like an extra username of sorts you’d have to know. I’ve always been unsure of where to draw the line when it comes to obscurity. People online are viciously against it, but isn’t a password also just obscurity, if you squint your eyes real good? It’s all secrets users would need to know.
All that being said, I’m thinking of hosting it at-home-only as well. Would be a huge win in security and barely any loss in convenience.
I’d say the things hitting your endpoint are going to be entirely automated scanners looking for unauthenticated resources or low-hanging common passwords. If you have even a moderately strong password, it’s just noise. I’d be wary about drawing any significant conclusions from those logs, because the sophistication of the attackers you’ve excluded is quite low.
I’d say defense-in-depth is more about nesting strong cryptographic primitives than simply adding layers. What you’re trading off is complexity and convenience versus security. In the URL case, a password is more secure (and treated as such): lots of care is usually taken to make sure the hashing scheme is timing-resistant, etc. I don’t know whether Caddy makes equivalent guarantees, but I’d be very surprised if path matching were anything other than a plain string match/regex/trie. In terms of time to crack, just prepending those characters to the password would give you more protection, because they then have to go through a resource-intensive hashing process.
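To make the timing point concrete, here’s a toy illustration (plain Python, not Caddy’s actual code) of the difference between an ordinary equality check and the constant-time comparison that password verification code tends to use:

    # Toy illustration only. An ordinary comparison can bail out at the first
    # mismatching byte, so response time can leak how much of the secret an
    # attacker has guessed; hmac.compare_digest always walks the full length.
    import hmac

    SECRET_PATH = b"/correcthorsebatterystaple"  # placeholder secret

    def naive_check(candidate: bytes) -> bool:
        return candidate == SECRET_PATH                     # may short-circuit early

    def constant_time_check(candidate: bytes) -> bool:
        return hmac.compare_digest(candidate, SECRET_PATH)  # timing-safe compare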
An example of defense-in-depth would be to host at home only. There, you’re nesting actual isolation (which is a good security primitive by itself) with a strong password. This gives you protection even if your threat model is “Caddy is borked and is letting anyone do anything”.
Now in reality, you can do just about anything and it’ll work (because in the grand scheme, you’re probably not a high value enough target for any high cost attacks). If you secretly happen to be, then you can afford an actual security audit, rather than relying on random info from HN :)
> I’d say defense-in-depth is more about nesting strong cryptographic primitives, than simply adding layers.
I like this insight, thank you.
One rebuttal I have: appending those characters to the password would make it a stronger password, but it wouldn’t add another, wholly different, mode to authentication. It would be the same thing, just harder (and I don’t need a longer password as it stands). What if this mode is flawed in itself? That’s when a wholly different one is desirable.
In that spirit, I had also thought about just slamming HTTP basic auth in front of everything. Even if that basic auth uses weak credentials, it adds to security in a multiplicative/exponential way (multiple passwords/systems) rather than a merely linear one (a single but longer password). I suppose that’s also what you mean by layering.
Linear/multiplicative stuff is actually quite helpful for discussing the path thing.
Adding a “password” path actually only increases your security by a constant factor per character, because of the risk of timing attacks (unless you are sure the path matching algorithm is secure against them, now and in the future). Ideally a second layer would guard against that in a multiplicative way (an entirely different system).
Cryptographically, adding characters to the password is better than adding them to the path, because it increases the search space exponentially, whereas a path that leaks timing can be brute-forced separately, character by character.
But this assumes a very perfect view of software, where there are no bugs. Once you add a risk model for bugs, then there might be small values of path length where the additional constant factor is better than the multiplicative one that you get from adding characters to your password. So your rebuttal holds, depending on the exact bug risk model you have.
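As a rough back-of-the-envelope comparison (assuming a 62-character alphabet and, for the path, an attacker with a perfect per-character timing oracle, the worst case for the path):

    # Back-of-the-envelope numbers for the constant-factor vs exponential point.
    # Assumes a 62-character alphabet (a-z, A-Z, 0-9) and 10 extra characters.
    ALPHABET = 62
    EXTRA = 10

    # Added to the password: each extra character multiplies the search space.
    password_guesses = ALPHABET ** EXTRA

    # Added to a path with a perfect per-character timing oracle: each position
    # can be attacked independently, so the work only grows linearly.
    path_guesses = ALPHABET * EXTRA

    print(f"extra password characters: ~{password_guesses:.2e} guesses")
    print(f"extra path characters (timing oracle): {path_guesses} guesses")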
I think nowadays tailscale/wireguard is really convenient and pretty secure as a second layer. I was averse to self-hosting my password manager in the past because I wasn’t confident I’d consistently have the time to secure more critical applications, but I might actually move to a world where I host more critical things myself behind a VPN.
I had an at-home-only setup once. Then I failed to unlock my vault on my iPhone (Face ID issue or something) and it refused to let me enter the master key without first passing the 2FA check with the server (did it delete the local vault or something?!). I had to go home to fix it.
I’d recommend ensuring you have some sort of VPN solution so that you can access your vault away from home, too.
Personally, I just decided to use the 1st party server. I realized that reliable access to my vault is a service I really don’t want to be without due to technical issues in my setup.
I currently have my server at home connected via wireguard to a VPS. On that VPS, I run Caddy and have it reverse proxy back to my server over wireguard.
If I were building it out today I might just use tailscale and be done with it.
I'm not sure whether every device is a full replica of the server. I believe that's the case (given how the clients behave when the server is offline), but that doesn't figure into my recovery plan.
But in the case of the mobile apps, downloaded from their respective platforms' app stores, how can you guarantee the code you see on GitHub is the exact same code you're running on your device?
Admittedly, this supply-chain verification problem applies to all mobile app store apps, but it seems particularly important for something like a password manager.
In a perfect scenario you would be able to use a reproducible build [0]; for Android, you can actually get Bitwarden from F-Droid [1], which uses those reproducible builds.
For the Google Play Store, developers also used to sign their apps before releasing them to the store, so you knew a build came from the developer, but Google removed that when they introduced app bundles. There is still a way to verify that a build is the same as the one the developer provided, but the automatic protections that used to be there are now gone [2].
Looking at that, it doesn't seem like you can actually get Bitwarden from F-Droid? That looks like instructions to set up a third-party repository (hosted by Bitwarden)?
The page didn't mention anything about reproducible builds. (That doesn't mean they aren't using them, though; that would be internal.)
You can see their server and client code here: https://github.com/bitwarden