I think something people are missing here is that outsourcing your SSO isn't just about whether it's technically easier to do so, or having the competence to run it. A significant factor is also being able to say on contracts with clients that you use a reputable provider for SSO, who are then on the hook for dealing with all the documentation around precisely how they do that, and how they ensure key material is secured.
In the case of 1Password it's a bit more interesting, since they presumably already have all that sorted because that is literally the product they're selling, but in most cases it's easier to say "no one in the organisation has access to keys" rather than "no one has access, except Bob, because he runs the SSO server, but we really trust him you know".
It's about managing liability at that point, it's not a technical concern but a legal/regulatory one. If you have a contract with Okta and they screw it up, you have better legal recourse than just firing the guy internally. Security isn't exclusively a technical concern.
But you can still get sued if a breach at Okta causes damage to your clients. That is: outsourcing doesn't remove obligation or liability - you can point the finger but are still on the hook.
Yes, you can sue anyone at any time for anything, but you have to actually show cause, show harm, show standing, show breach of contract, etc.
So you sue me. I said I fulfilled all of my obligations according to the contract and since the fault was with a third party you knew I already trusted, I get your case thrown out because I can demonstrate you knew this was a risk, and agreed to hold me harmless in the case of a third party breach beyond my control. Or, I settle/lose and I or my insurance company then go after Okta for my losses.
You can't use reason and logic with the security people. They have a box that needs to be checked, they don't consider any possibilities outside of that box.
As a security person, agreed. Too few have ever written a line of code or shipped a product under massive constraints. However, those checkboxes do exist because engineers, IT, and everyone else involved have major lapses. I guarantee you 99% of orgs out there have a worse posture than Okta and never find out they've been compromised.
I think it's important to admit that _all_ organizations have been hacked. Even the NSA. Most just never find out and if they do they have little idea what was compromised.
Worst-in-class is overstating it, but they've had three serious breaches in the last three years, not counting exposure of their private GitHub repos.
The thing that really bothers me about Okta though is that they've been caught lying when asked if they were affected by CVEs. See this thread from one of the Duo founders responding to folks (including one of the Cloudflare founders) being stonewalled by Okta during the fallout from Log4Shell: https://nitter.net/jonoberheide/status/1506280347306188805.
If Azure AD was compromised your clients were all probably compromised too, so in that way you’re probably fine. I’m sure that Microsoft/Okta/Google hold the keys in some kind of HSM or whatever, but there’s probably a Bob there that could get in if he really wanted to.
You have to figure out what you’re threat modeling for, most people aren’t worried about the big SSO providers being breached because it would be expensive to work around them completely.
Choosing a supplier is not washing hands of the issue, if you are a serious business.
If you are found to not have done due diligence and have chosen a known vulnerable supplier, you would be seen as equally or more liable compared to having yourself implemented a solution that was found vulnerable.
In fact, I imagine your customers can sue you in the former instance and likely win more easily than in the latter (IANAL).
Don't worry they have a provider too. Oh and that provider is your customer and they store their secret in your secret manager, but you didn't know that until after the post mortem. /s
It's remarkable that C memory safety vulnerabilities are the most talked about, when the most impactful ones are due to insight collapse from over-complexity, caused by following so-called best design practices.
The people making noise about unsafe C and then recreating Byzantine monsters are often the same camp; it's part of the normalized incentive to keep yourself employed by building Rube Goldberg machines for simple apparatus.
The people who rant most (to zealot levels) about “unsafe C” are the most likely to write horribly insecure code in whatever their “safe” language of choice is.
It reminds me of early aviation tech like computers, autopilot, fly-by-wire, etc. In many cases accident rates went up initially because pilots (very wrongly) assumed all of these magical newfangled technologies could just fix and handle everything.
I’m not saying C isn’t fundamentally unsafe but your favorite memory safe language isn’t a cure-all and I can’t remotely understand how you could think it is.
> The people who rant most (to zealot levels) about “unsafe C” are the most likely to write horribly insecure code in whatever their “safe” language of choice is.
A longer story would be more convincing for those of us who have different experiences.
For sure, UBI will stop people from doing that, if everyone is getting 300k worth of tax-free credits so they can work on things that actually make the world better.
It feels unproductive to stage a discussion about anything around such unrealistic numbers. Very few people make 300k+, the average US salary is just under 60k.
Organizational bugs can be very impactful to the organization that has them. But the memory safety bugs are typically exploitable via automatic methods and affect all organizations using the software. For example, Heartbleed was so talked about because it was so impactful.
There's plenty of really nasty C specific zero days out there, but the bulk have rather limited applicability: witness the latest curl CVE. Potential compromise of the world's largest password silo however sounds clearly impactful. And whether Heartbleed was worse than log4shell is mostly philosophical.
> And whether Heartbleed was worse than log4shell is mostly philosophical.
Most definitely not. Heartbleed means that as long as you are on the internet they have your certs, full stop. Log4shell required access in both directions (to inject the vulnerable string, then for the target machine to load the payload off the internet) to do anything.
The sheer fact that you think it is a 'philosophical' difference points to you not understanding anything about the topic. Log4shell could be trivially prevented by common security practices like "not allowing your apps to freely access anything on the internet"; Heartbleed could not.
It's just complexity. Stuff gets missed, stuff gets updated but the cascading effect it will have on other logic gets overlooked, etc. Complexity breeds vulnerability. But a secure SDLC, layered security defenses, and countermeasures can reduce risk. I like to think our inability to grasp vast levels of complexity is the root cause.
What can be measured tends to get optimized. We have numerical, easily gatherable data on software vulnerabilities. However we, imo, lack information on how (and sometimes even when) organizations get data stolen. The reasons behind the latter are most likely complex, as they pertain to humans, and as such it's easy to look for rescue in technology.
You start with a small product that you fully understand. Eventually you hire more people; you onboard them and they understand the whole product, but are mostly focused on the big chunks of it and how they interact. Then those people each become a team of people of mixed quality. Some know how things work, some don't, but together they can figure out their things in the part of the product they own and can talk with other teams to get things done. Eventually everyone who was there at step 2 leaves. Skip 10 years of growth, adding features and the usual attrition rate (assuming better people have more mobility and opportunities). At this point you have a critical mass of people who can work on the product, but don't know how we got here and what not to do. At some point fundamental assumptions change due to external events and the system as a whole no longer solves the original problem in the most efficient and elegant way. It is what it is. Throw in some managers hired from bigger companies who know how to do things the right way
All of this security policy relies on memory safety. You build policies on the idea that you can trust some broker to manage and enforce it. Memory safety vulnerabilities subvert all of that, that's why they're so serious.
Consider an SSH server. The most likely attack against it isn't a memory safety vuln, it's stealing someone's key. But if there were a memory safety vuln keys wouldn't even be a part of the conversation - that memory safety vuln would allow the attacker to bypass policy altogether.
> The IT team member’s credentials for all systems were rotated, switched to only using a Yubikey for MFA,
I hope this spurs them to switch all employees to use Yubikeys for 2FA. Anything less than FIDO2 is really weak.
Can someone explain why people still choose Okta? I personally feel way more comfortable using GSuite as an IDP. Okta suffered a pretty terrible breach and, candidly, I hadn't heard great things about their security practices before that.
Yubikey or other browser-based MFA would not have mitigated this attack - the attacker obtained a valid administrative session token from _after_ any MFA would've been completed. MFA or not, this attack would've happened regardless.
Enforcing hardware-based MFA is good practice and may protect against future attacks (the attackers might be back with a spear-phishing campaign?) but is completely irrelevant to what's actually happened here.
> Can someone explain why people still choose Okta?
Nobody's been fired for buying IBM. Also, blame outsourcing - if you run your own Keycloak and get pwned it's on you; if it's Okta then it's not your problem.
Classic observation about security: any amount of damage is acceptable so long as you're not culpable. All you have to do is get insurance to cover it and/or tell your data-breach'd users to suck it up.
Microsoft literally lost private keys which gave state actors of a hostile government access to all emails. How much worse can you get? And nothing happened; they are too big to fail. Everyone who used them as an email provider is fine too. The cosmic ether absorbed all the damage.
> MFA or not, this attack would've happened regardless.
Yeah sorry, I shouldn't have implied otherwise / should have been clearer about that. I just thought it was notable that admins were not using stronger 2FA. This is a company that owns password management; it seems like having proper auth should be the number one priority.
Of course, once a session token is created you are past the initial point where a 2FA mechanism like a Yubikey would help.
> if you run your own Keycloak
Sure but I think most people would consider Google to be a far safer company than Okta, no? If not I think that they should be. TBH I feel like "session token used from a totally different computer/UA/IP/Region" is something Google may actually have caught.
> the attacker obtained a valid administrative session token from _after_ any MFA would've been completed
But you can lock session tokens to specific IPs or user agents. I've implemented something similar in the past for a B2B admin panel, and while there was the occasional false positive from browsers updating in the middle of a session (incrementing the user agent's version number) and from people's IP changing when they switched networks (or, in one instance, a badly configured office network that randomly routed through 2 proxy servers with different outbound IP addresses), which then made it demand MFA again, it was fairly rare and didn't attract too many complaints.
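For illustration, a minimal sketch of that kind of binding, assuming a server-side session store; the names here (`SESSIONS`, `fingerprint`) are made up for the example and not from any particular framework:

```python
import hashlib
import re

# Illustrative in-memory store: session token -> client fingerprint at login.
SESSIONS: dict[str, str] = {}

def fingerprint(ip: str, user_agent: str) -> str:
    # Keep only major version numbers in the UA, so a mid-session browser
    # update (118.0.x -> 118.0.y) doesn't trip the check.
    ua = re.sub(r"(\d+)\.[\d.]+", r"\1", user_agent)
    return hashlib.sha256(f"{ip}|{ua}".encode()).hexdigest()

def create_session(token: str, ip: str, user_agent: str) -> None:
    SESSIONS[token] = fingerprint(ip, user_agent)

def validate_session(token: str, ip: str, user_agent: str) -> bool:
    # On a mismatch, don't hard-fail: demand MFA again, as described
    # above, rather than silently killing the session.
    expected = SESSIONS.get(token)
    return expected is not None and expected == fingerprint(ip, user_agent)
```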
FIDO2 wouldn't have helped the customers' accounts since valid session tokens were obtained. However, hardware tokens for the Okta customer service accounts may have blocked the threat actor's access depending on the (undisclosed) method of attack.
> I hope this spurs them to switch all employees to use Yubikeys for 2FA. Anything less than FIDO2 is really weak.
The session cookies were stolen. 2FA has nothing to do with this.
> Can someone explain why people still choose Okta? I personally feel way more comfortable using GSuite as an IDP. Okta suffered a pretty terrible breach and, candidly, I hadn't heard great things about their security practices before that.
Have you actually used Google Workspace for anything serious? It's one of the least capable IDPs out there, with Okta being one of the most capable. Comparing the two is like comparing a bicycle to a motorbike because they both have two wheels. Frankly, I'm surprised to read anyone on a technical forum recommend Google Workspace as an IDP.
I’ve used and implemented both. A case can be made for Google Workspace as an IDP if you don’t need all the bells and whistles. It is extremely secure and generally, engineering practices at Google give me much more confidence than Okta.
I’ve spent quite a lot of time interacting with Okta support over system issues and frankly that experience does not leave me assured. My sense is that perhaps they’re wrangling software that has grown exponentially in size and complexity as they try to accommodate the myriad use cases.
I've used and implemented both too. You immediately outgrow Google Workspace as an IDP whenever you need to do role-based access control, like AWS Session Manager for machine access instead of long-lived SSH keys. Which is a pretty major shortcoming for an IDP to have.
I'm not suggesting everyone should use Okta either. Frankly I'm not the biggest fan of it myself. But I wouldn't argue it's less secure than Google Workspace when the big G forces you to work around its limitations with less secure implementations.
100%, been living this for many years. Google IDP is great for a shop that doesn't have elaborate identity needs. It quickly falls apart when trying to manage an enterprise of any complexity.
Okta is quite flexible and supports a lot of tech you want (WebAuthn/SCIMv2 provisioners for popular platforms/all the SSO/API integration/workflows), but comes with its own set of warts and dysfunction (API rate limits, quirky AD integration with anything complicated).
Probably any of them would be suitable, if you are comfortable building your own custom tooling AROUND their APIs. Almost none of them will do exactly what you need out of the box.
> The session cookies were stolen. 2FA has nothing to do with this.
I had replied 5 hours earlier than your post already saying that I understood that. I was just quoting from their report because I think it's interesting.
> Have you actually used Google Workspace for anything serious?
Yes lol I'd rather not comment on which of the companies I've worked at that used Google as an IDP but you're welcome to speculate - they all have thousands of employees and followed AWS best practices around user/role management. I can say that we used it at the company I founded as well, though.
I've seen places appear to use Google as an IDP, but the reality was that Okta or Active Directory held the actual corporate identity management, with a few select developer tools hooked into a secondary domain hosted in Google Workspace. Those companies appear on the surface to be following best practices, but leavers are constantly forgotten about, because corporate IT will remove them from AD and nobody will tell "DevOps" (or whoever owns Google Workspace) to remove those same leavers as well. Which obviously then creates significant risk.
Often these places end up with two domains because working with Corporate IT is painful for agile development scrums. But the issue there is company culture, not the IDP.
I'm yet to see anyone use Google Workspace effectively at enterprise scale (startups, sure. But startups optimise for speed of development, not long term scalability).
> Can someone explain why people still choose Okta?
Most people don't choose Okta. Somebody at the executive level chooses Okta and everybody else gets to deal with it.
Yubikeys mean that your IT department now does customer support for every single password issue instead of handing it off to the outsourcing company.
MFA has two problems: the technical one (technical security aspects of implementing keys, fobs, phones, SMS, whatever) which is almost irrelevant and the social one (customer support, forgotten passwords, key through the wash, can't get email, etc.) which is the gigantic one.
Everybody wants to outsource the gigantic, dumbass customer support role of security. The problem is that everybody is also incentivized to cut corners once having done so.
They are, my point (which wasn't very clear) was that somebody needs to do the customer support.
If you just do YubiKeys but not Okta, your staff takes all the calls. Yubi sure isn't going to do the customer support when Employee #46 can't log into Office 365 today.
That's why people outsource this to Okta and its ilk.
So - I work for a company that uses Okta. Our Infosec/IT group also rolled out Yubikeys. I need to work with IT when I'm having problems with my Yubikeys.
When we SSO to Okta, they will for some applications (VPN, GitHub, etc.) require us to MFA with our Yubikey. But for some other, lower-risk applications, MFA isn't required.
When an employee has difficulty connecting to one of the 84 applications that we SSO via Okta - they file a ticket with IT, not Okta. There is no way any employee would even know how to contact Okta, and there isn't any way that Okta could troubleshoot for the employee anyways - the source of truth for their password (first factor) would be the corporate Active Directory server.
Okta doesn't (to my knowledge) provide any customer support (I guess they might work with IT if there is a service-outage) to our employees. It's just an IdP SaaS.
Can Okta actually manage your enterprise's Yubikeys for you? I wasn't aware that was a thing since I've never heard anybody use it - everybody just endured the burden of managing their Yubikeys themselves.
I think your point is still confusing. No one is suggesting replacing Okta with a Yubikey, so whether one chooses Okta to offload support requests or not has no bearing on Yubikeys.
> Can someone explain why people still choose Okta? I personally feel way more comfortable using GSuite as an IDP.
GSuite as an IDP is very very limited. Sure, it checks that box as "Yes, is IDP"
However if you want to do much beyond "can connect", it's either difficult or impossible.
Say, for example, you're an organisation that has all employees in GSuite.
You also have an AWS organisation, and need to provide access - well, AWS SSO[1] is the recommended way to do this. I set up the connection, and people can now connect.
However, there are some gaps:
There's no automatic user [de]provisioning into AWS SSO based on GSuite user groups. (You could write some glue code to do this via the SCIM support in SSO, as sketched after this list, but you have to maintain it.)
There's no way on either the GSuite or AWS SSO side to enforce an MFA check when the SSO session is being set up.
GSuite doesn't let you require an MFA check before authenticating to SAML applications.
AWS SSO doesn't allow forcing an MFA check when using an IDP, even though it does if you use its internal directory.
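On the first gap: a rough sketch of the glue code you'd end up writing and maintaining, assuming a SCIM 2.0 endpoint and bearer token (the URL and token below are placeholders). The part that diffs your GSuite groups against this is left out, and that's exactly the bit you're stuck maintaining:

```python
import requests

# Placeholders: the real endpoint and token come from the AWS SSO
# automatic-provisioning settings.
SCIM_BASE = "https://scim.example.awsapps.com/scim/v2"
TOKEN = "..."  # long-lived bearer token you now have to rotate yourself

def provision_user(email: str, given: str, family: str) -> str:
    """Create a user via SCIM 2.0; returns the new user's SCIM id."""
    resp = requests.post(
        f"{SCIM_BASE}/Users",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
            "userName": email,
            "name": {"givenName": given, "familyName": family},
            "emails": [{"value": email, "primary": True}],
            "active": True,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]

def deprovision_user(scim_id: str) -> None:
    """Deactivate (rather than delete) a leaver."""
    requests.patch(
        f"{SCIM_BASE}/Users/{scim_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
            "Operations": [{"op": "replace", "value": {"active": False}}],
        },
        timeout=10,
    ).raise_for_status()
```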
Okta, and similar products (can) do those things for you, and allow some of those MFA checks to be based on what endpoint is being used. At least according to their marketing materials and sales people. I've never actually done it myself.
I guess the tl;dr is that Okta provides a lot more options for automation and security glue between the identity provider and consuming application(s).
[1] When I say AWS SSO, I mean specifically the AWS product: "AWS IAM Identity Center (Successor to AWS Single Sign-On)"
Interesting, thank you! I'll note that GSuite does allow you to enforce Context Aware Access checks on every SSO, though that does not include a separate 2FA in the traditional sense.
Lack of auto-provisioning seems like the biggest issue to me.
> There's no way on either the GSuite or AWS SSO side to enforce an MFA check when the SSO session is being set up.
I don't know what you mean here but I'm curious if you wouldn't mind?
Much of what you're talking about seems to be about when a 2FA is forced. With GSuite it's forced on login, and CAA is forced on SSO, and that's it. If you want other 2FA prompts (such as with AWS) those are configured through that service. I think you're saying that Okta would allow you to force the 2FA on login, and then when you want to access some other system via SSO it would ask you to 2FA again?
It's only available in the Enterprise plan levels.
> I don't know what you mean here but I'm curious if you wouldn't mind?
Using the prior example where GSuite is my IDP. If I want to require an MFA check before granting access to AWS SSO - I've got no way of doing this.
CAA might help, but if I want to be explicit "Hey, this has access to customer data - we're always going to do an MFA check on this access", there's no good way of doing this.
Using an intermediary like Okta would allow you to do that additional level of checking.
e: To be clear, I'm using AWS SSO as the destination application as an example. There's other applications that you might want to control access to that also have particularly sensitive data where you want to re-verify that someone else hasn't just walked past an unlocked laptop and decided to poke around.
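The policy being asked for is simple to state. A toy sketch of the decision an intermediary like Okta can make here, where the app names and the freshness window are purely illustrative:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tags: apps that always warrant a fresh MFA check,
# no matter how recently the SSO session itself authenticated.
SENSITIVE_APPS = {"aws-sso", "customer-data-portal"}
MFA_MAX_AGE = timedelta(minutes=15)

def needs_step_up(app: str, last_mfa_at: datetime) -> bool:
    """Should the IdP re-challenge before issuing a SAML assertion?"""
    if app not in SENSITIVE_APPS:
        return False
    return datetime.now(timezone.utc) - last_mfa_at > MFA_MAX_AGE
```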
> It's only available in the Enterprise plan levels.
It can be bought separately for Business iirc.
> there's no good way of doing this.
Through GSuite, agreed. But you could just configure the 2FA for the downstream service, no? That's what we did at my company.
> There's other applications that you might want to control access to that also have particularly sensitive data where you want to re-verify that someone else hasn't just walked past an unlocked laptop and decided to poke around.
For sure, I'm not trying to argue or debate or anything, was just curious.
> But you could just configure the 2FA for the downstream service, no?
Only if the downstream service supports it. Lots don't, or at least don't enforce it when you come in via SSO.
AWS SSO being the one I keep coming back to, because it bugs me so much.
For those that do, you now have to also manage the 2FA tokens with that service, using whatever they support. Often that's SMS based 2FA, or maybe TOTP, or their own custom TOTP/Push. Maybe they support FIDO2, but only a single FIDO2 key.
Yeah makes sense, thanks. I think I'm fine with a 2FA on GSuite login + CAA on subsequent SSOs but I do think it would be nice to be able to force another 2FA for sensitive stuff like AWS.
Unfortunately, GSuite seems to move very slowly :\
Do you have any proof Okta is better or worse than any other solution? Measuring security is very hard.
Here is the evidence I have seen regarding Okta's security:
1. They had a breach last year.
2. They just had another breach. The breach was in their customer service department/system.
3. They may or may not have been slow to disclose the breach (remember, they may not have realized they were breached because they may get a lot of false breach reports and they may not have found evidence of a breach until recently).
4. They did not give credit to the customer who first reported the breach.
5. They may not have communicated with the customer after the customer reported the breach.
Here is what we don't know:
A. How skilled was the attacker? Skilled attackers are better at covering their tracks and harder to catch.
B. How many breaches has each ID provider had?
C. How many breaches has each ID provider detected?
D. How many breaches has each ID provider detected and covered up?
E. How many security bugs are in each provider's service? How serious are the bugs? How easy are they to find?
F. How good is the provider at detecting breaches?
G. How well are the provider's employees trained?
H. What percentage of the ID provider's employees care about security? A lot of people in the tech industry (software engineers, IT/sysadmins/devops, managers, etc.) claim they care about security but their actions say otherwise. Examples include using easily guessed passwords, not patching software/dependencies, and writing insecure code (buffer overflows, SQL injection, cross-site scripting errors, etc.).
My main point is that bashing Okta because they reported a breach does not prove Okta's product is any worse than any other product, because we just don't have the information. We don't know how good other products are, and it is even possible Okta is better than some or all of its competitors (it could also be worse).
Unless you’re a tiny company, there are legal/regulatory requirements to disclose breaches.
When it comes to AuthN, the right number of breaches is 0. Same argument for password managers, and why I advise people to stay clear of lastpass.
We all know that breaches do occur, it's impossible to be 100% secure, etc., but having multiple breaches when you're a security service provider is simply unacceptable. And when the timeline shows you were slow to react, it's negligent for anyone to continue using that service provider.
Data is great, but in lieu of it, that’s enough for me.
Hardware keys do not prevent session hijacking. If your IT department is empty-headed enough to share active session cookies, you are toast no matter what MFA you use.
Tying sessions to the user's IP would help prevent this, but it comes with its own issues.
Even better, don't put admin tools on the open internet; put them behind a VPN like WireGuard. Then if the session cookies are stolen, the attacker can't even reach the tools anyway.
I wonder if we will see "zero trust" going full circle, and back to isolating privileged systems from the internet, on the basis of endpoint compromises stealing user session cookies.
Just making the authentication and service endpoints inaccessible online significantly reduces the attack complexity.
There are a bunch of things folks forget when talking down zero trust; e.g. if it is done properly then the endpoint the user is using has to AuthN too, and be in good health. That solves cookie/token theft, forcing the attacker to fully route through the endpoint.
If you have a hardware second factor that does not require user interaction (or that has a low-security tier that does not require user interaction), couldn't that be used to transparently re-validate existing sessions with zero inconvenience cost? Keep challenging that factor with the existing session and a nonce. Am I missing something?
And even if you don't have that, when authentication is your core business you might want to harden your sessions against transmission contents getting in the wrong hands by setting up an additional factor in the browser's local storage that gets challenged without putting the key on the wire. Sure, that local storage is no secrets vault, but you don't write some existing secret there, you set up an additional key that would protect e.g. from a .HAR playback. That's the nature of multi-factor tiered defense, protection from just one angle is protection nonetheless, perfect must not be the enemy of good.
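A sketch of that challenge-response idea in Python (in reality the client half would be JS working against localStorage): the key is enrolled once, and afterwards only nonces and MACs cross the wire, so a captured .HAR replay has nothing with which to answer a fresh nonce:

```python
import hashlib
import hmac
import secrets

def new_challenge() -> str:
    """Server: issue a fresh nonce for this session."""
    return secrets.token_hex(16)

def client_response(stored_key: bytes, nonce: str) -> str:
    """Client: prove possession of the enrolled key without transmitting it."""
    return hmac.new(stored_key, nonce.encode(), hashlib.sha256).hexdigest()

def verify(server_key: bytes, nonce: str, response: str) -> bool:
    expected = hmac.new(server_key, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Enrollment happens once; afterwards the key itself never goes on the wire.
key = secrets.token_bytes(32)
nonce = new_challenge()
assert verify(key, nonce, client_response(key, nonce))
```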
That's actually what GSuite's Context Aware Access does if the downstream service supports it (GSuite does, ofc). It will rechallenge your client on every request (client can cache). Exfil of the token won't work if that challenge includes hardware verification.
When you choose Okta you can be sure they'll still exist in a few years. With GSuite as an IDP, Google might decide that there aren't enough users and kill the product randomly at any point in the future, as they have done with Google Domains and countless other examples.
I think there’s an interesting statistical argument here.
Google supports billions of accounts and clearly screws up a few of them (which are noteworthy events precisely because they are so rare).
Okta’s total accounts under management are a fraction of this number and they clearly have compromised or put larger percentages of such accounts at risk given what’s been disclosed.
> Google supports billions of accounts and clearly screws up a few of them (which are noteworthy events precisely because they are so rare).
The problem is that Google doesn't (seem to) have any kind of customer support. Everyone's gonna get hacked at some point, including Google themselves, so I'll choose the company where I can at least get somebody on the phone.
At least Okta is trying to fix their mistakes. Google doesn't unban wrongfully banned users even when the New York Times publishes an article about an individual case <https://news.ycombinator.com/item?id=32538805>.
Are you sure they didn't, after that article was published? "Google eventually fixed the problem a few months after this story was published" is not a story the NY Times is that interested in publishing, nor is "it turns out there was a reason this account remained banned" a particularly compelling follow-up story.
Their security record is fantastic, but afaik they don't do what Okta does.
Okta: comprehensive SCIM allowing your IT, instead of random application admins throughout your company, to manage user provisioning/deprovisioning. Start pages that don't require users to remember URLs but instead show them a list of applications they can use. Adaptive MFA with IT-administered settings (though Google's super-enterprisey solutions may have something here.)
Not sure that "security" should be the top reason for switching from Okta to AzureAD.
Also, as a "user" (in IT), AAD is a royal PITA to use, with stupid limitations, such as not supporting group hierarchies. And don't even get me started on the horrid UX of the Azure portal. I've never used Okta in any capacity, though, so I don't know if it's any better.
At this point any self-hosted option would do just as well. Not perfect, but not only would it require a targeted attack to pwn, if you do end up pwned at least you weren't paying tens of thousands a year for the privilege.
This is not a serious answer. It’s very unlikely that “any self-hosted option” actually competes with Okta’s IDP solutions, has really decent security, has very good logging features, has behavior analysis features to detect login/session anomalies, and has security staff to respond to those threats quickly.
Saying “just self-host anything” makes a massive assumption that your company can afford to compete against Okta/Google for the same talent. Some companies can, but the vast majority of Okta’s user base can’t. You would be fighting against specialization and comparative advantage.
It might be possible that there are significantly better options than Okta, but I have yet to find them.
Okta has a major insider threat problem though (either their employees/subcontractors get pwned like last time, or their infra gets pwned like now). I would expect much better from a "security" vendor, and since they consistently fail at it, I wonder what else they fail at that we don't (yet) know.
> has behavior analysis features to detect login/session anomalies
This is what baffles me. Given the description of the attack, an attacker reused a stolen session token from a different IP address (not sure if they bothered to spoof the UA) - how was this not immediately challenged, especially for a high-value account? I indeed expected this to get flagged immediately.
> that your company can afford to compete against Okta/Google for the same talent
Not sure about Google, but given the (repeated!) breaches, Okta can't compete for talent either, or that talent isn't actually everything.
A big advantage of self-hosting is that you reduce your exposure to opportunistic and "for the lulz" attacks - if someone breaches Okta, it's trivial for them to automatically pwn everyone. If you self-host, they'd have to know you exist and target you specifically - that doesn't scale. Plus you can layer extra security on top such as VPN and then your IdP is invisible from the outside and a potential attacker would first need a VPN exploit before they can even do the initial recon to find out what's your IdP and its vulnerabilities. Can't do that with Okta.
The consistent pattern of breaches and their nature makes me believe they are not a serious vendor worthy of the price or the trust people put in them, and missing a better option I'd rather self-host - all else being equal, at least it would save on the fees.
> Saying “just self-host anything” makes a massive assumption that your company can afford to compete against Okta/Google for the same talent.
I have always found this argument so interesting. I host tens of services myself on a Raspberry Pi, with better uptime than many of your favorite cloud services. If a company cannot find even one person, earning a software salary, to host an open source IDP for tens or hundreds of employees, either something is deeply wrong with the company, the sysadmin talent pool, or both.
The problem isn't hosting, it's the maintenance and plumbing.
Companies learn the same lessons painfully when it comes to build-vs-buy. There is no such thing as "buy". There is "build", and there is "buy-and-build". Purchasing and deploying (or in executive speak, "implementing") a particular solution is at best 30% of the work - and less than that of the cost. The remainder goes into integrations, maintenance, customisations, ongoing configuration, adapting to ongoing user behaviour changes, adapting to ongoing development needs, etc.
Running an IdP is one of the very few ventures where four nines of reliability is still inadequate. Once you have a centralised auth, it CAN NOT break.
Also when you self host, you don't need to "compete" with alternatives. Your goal isn't to provide service on the same scale as the commercial alternatives, but to provide a service to solve your local problem.
I guess I'm curious as to how someone self-hosting would be able to perform their security better than a Tier 1 IdP. It would be the equivalent of trying to self-host email.
Also - never having worked for a company that hosted their own - what's the experience like? With Okta, you can essentially SSO onto hundreds (thousands?) of different cloud service providers. Similar if you self-host?
> I guess I'm curious as to how someone self-hosting would be able to perform their security better than a Tier 1 IdP.
Not saying you can do better, just saying that you can get the same level of quality for free without paying for an incompetent & expensive snake-oil vendor.
But if you want a non-exhaustive list, there are a few lessons we can learn at Okta's expense:
Not outsourcing support to low-wage boiler rooms could be a good start - that's what got them their previous breach some time ago, and the current breach isn't that far off, although it seems more down to the support infrastructure than the people themselves.
If you can, restrict access per IP address. Even better, if everyone works out of the office or VPNs into it, don't expose the service publicly in the first place. Defense in depth, something we seem to have forgotten in the process of putting everything on the public internet.
If you see an existing session token suddenly show up with a different user-agent and IP address (not matching previous usage patterns or expected wifi-to-mobile handover), invalidate the session and ask the user to reauth again.
If possible, high-value accounts need to be IP restricted in the first place to known IPs or at least subnets (if the employee is on a dynamic IP).
> I guess I'm curious as to how someone self-hosting would be able to perform their security better than a Tier 1 IdP.
If you use Okta, every single Okta support person is a super-admin in your environment with access to every role in every app. It is trivial to do better than that.
to the list of Okta certifications in your bookmarks? You absolutely did not do a search for it just now because that is a stale link so there is no way you even clicked through that link before pasting it and you already knew what was supposed to be on that page.
and they are a total clown show. In fact, basically all of the clown shows have those, because all that most of those standards amount to is verifying that your paperwork is consistent.
I got that back from a search and do not have it in my bookmarks, and I did quickly skim it before pasting as well. You have a creative imagination about how I use the internet though, sorry.
returns a 404, page not found. There is nothing on that page.
I replied to you at most a few hours after you posted that link. Are you going to claim that Okta updated the URL in that few-hour window? For that matter, you did not even double-check the link after I said it was stale, or you would have used that argument in your post, since it is the only credible reason to claim a dead link had contents.
Once upon a time we purchased 1Password for a set price. The software was completely offline, storing the encrypted vault locally or in dropbox. No need to worry about hackers because there was nothing online to steal.
But somebody ran the numbers and decided the way forward to increased profitability was to move everyone's data to the cloud and charge a subscription.
And here we are, wondering if our data was compromised.
> But somebody ran the numbers and decided the way forward to increased profitability was to move everyone's data to the cloud and charge a subscription.
And, you know, the huge increase in convenience for all those people not savvy enough to do it themselves.
Also, as far as we know, how is it different to have your vault in Dropbox? Dropbox could be hacked, and that data is famously not encrypted. We don't know of anyone's 1Password vault actually being breached yet, do we?
1. Setup to sync via iCloud was very straightforward and 100% fine for the "non-tech-savvy"
2. Why would it matter if Dropbox was hacked? Your vault password was never sent to Dropbox. It was just a dumb store for an encrypted vault. The calculus changes now that the vault is online and stored by the same party you're sending the password to.
> Your vault password was never sent to Dropbox. It was just a dumb store for an encrypted vault. The calculus changes now that the vault is online and stored by the same party you're sending the password to.
You never send your password or account key to 1Password. Each side authenticates the other via cryptographic challenges and you receive the same encrypted database that 1P stores, as a dumb file host. They have a whole whitepaper on the security design of 1Password accounts: https://1passwordstatic.com/files/security/1password-white-p...
Technically, the earlier OPVault format stored on Dropbox/iCloud/locally was less secure due to generating a key just from your password.
> 1. Setup to sync via iCloud was very straightforward and 100% fine for the "non-tech-savvy"
As someone who did support for 1Password years ago, this is patently false. It was "fine" for tech savvy users, for everyone else it was a big opportunity for problems.
New "issues" came about from the switch to a hosted solution, but data syncing issues, mostly, disappeared.
Also, to confirm the other commenters, your password is never sent to 1Password in any situation where syncing is involved, whether it be Dropbox/iCloud or the hosted solution. And with the hosted solution your account key is also never sent to 1Password. This is also well documented in their Security White Paper.
I think you should read the 1Password security whitepaper before rambling on about things you clearly haven't spent the time and effort to learn about.
However, in their white paper they specifically have a section "Crypto over HTTPS" which outlines the risks of their new web UI. Yes, the password stays local if no one mucks with the delivered JS; however, 1Password being compromised would allow serving of modified JS.
This is a new vector only present due to their new web vault model + associated web UI features. They state it themselves in the whitepaper:
"The authenticity and integrity of the web client depends on the security of the host from which it is delivered. An attacker capable of
changing the web client on the server could deliver a malicious client
to the user"
1P could be 'compromised' and send a malicious version of their software back before they had the subscription model... I don't see how this involves any more risk.
Was sync to iCloud and Dropbox through a single file? Because there’s no chance to merge databases if they get out of sync. Proper cloud support can handle this.
> A member of the IT team was engaged with Okta support, and at their request, created a HAR file from the Chrome Dev Tools and uploaded it to the Okta Support Portal. This HAR file contains a record of all traffic between the browser and the Okta servers, including sensitive information such as session cookies. In the early morning hours of Friday, Sept. 29th, an unknown actor used the same Okta session that was used to create the HAR file to access the Okta administrative portal
Like your bank tells you, don't give the support person your password.
> Like your bank tells you, don't give the support person your password.
Sure, but was the user aware of what the HAR file actually contained?
At the least all active sessions should be cleared after sharing something like that. But that hinges on you knowing about it. Support should also make it mandatory/automatic.
It seems to me like the browser should not include cookies in HAR files by default. Sure, have a way to enable including cookies, but it should be behind a scary warning informing the user that the cookies likely contain sensitive data.
How about: People shouldn't send around HAR files that contain sensitive information, or at least make sure the information contained is no longer sensitive (eg. by flushing active sessions)?
HAR files are a debug tool. If I have to debug a problem with a webservice, I require them to contain all the information that was sent/received by the browser. The browser arbitrarily deciding to delete part of that information, would make it worthless to me as a debug tool.
Reminds me of that banner Facebook writes to the browser console to warn people against pasting stuff that will hand over their session to third parties. Handing over a HAR is just as bad if there is any form of session involved.
Browsers could add even more nag screens between the user and the tools, but those have zero effect once the assumption "I'm talking to a person from the hoster" is established. It's the old "put on a safety vest and a hardhat and you can walk anywhere" hack that only training can protect you from. And even with the best training, you'll never reach 100%. That's why you need many tiers if your operation is as sensitive as selling a trust store.
It's quite possible that 1Password is still far from being breached thanks to those tiers, but it's interesting to see even people working full-time on the conflict between authentication and convenience struggle with that balancing act.
There have been several times when I was working on triaging a customer reported bug where I wanted to ask for a HAR file (through customer support), but didn't, because I didn't want them sending me (or the support intermediary) their session cookies. And having the cookies wouldn't have helped me debug it.
> And having the cookies wouldn't have helped me debug it.
But having that cookie, or any other data from my dev environment, after I manage to recreate the reported bug internally, has often helped me debug problems.
The point here is, debug tools that delete information are less useful. Worst case, they are useless.
Yes, such tools can be problematic. With great power comes great responsibility. But I'd rather have powerful tools with a big 'ol warning sign than useless tools.
This sounds to me like the correct way to plug this leak. It will benefit additional use cases involving the HAR export beyond individual support portals sanitizing it before storage.
The "Copy as cURL" feature has the same issue. As does highlighting and copying request/response headers to the clipboard.
However, less sensitive data should ideally remain easily transferred, without much effort on the part of the user.
> Based on the activity logs provided by Okta for their Support Portal, the HAR file had not been accessed by their support engineer until after the events of the incident
That tells me the attacker either removed the access from the logs OR figured out how to access the HAR file without having the access recorded in the logs.
> Oct 21, 2023: Okta confirmed publicly that their internal support systems were compromised. This answers how the HAR file was accessed by the attacker and that the initial compromise was not through the employee’s laptop.
> Like your bank tells you, don't give the support person your password.
Sure, but they uploaded it to the okta portal, not to any random support person. Most users would expect that people getting files from the company portal would be cleared to see confidential and sensitive stuff.
I think the fault here is squarely on Okta. They're asking for these plain-text HAR-encoded sessions for troubleshooting but crossing their fingers hoping the end user properly sanitizes it.
Why not sanitize the HAR data on upload so that by the time it hits your system and is available to your underpaid and understaffed support techs it's completely sanitized to only the relevant and non-secure portions?
HAR is structured data and is very easy to sanitize programmatically. It's not rocket science.
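For instance, a sketch of server-side scrubbing on upload; the header list is illustrative, not exhaustive:

```python
import json

# Credential-bearing fields that should never survive upload to a
# support portal. Extend as needed (e.g. query-string tokens).
SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization"}

def sanitize_har(path_in: str, path_out: str) -> None:
    """Strip credentials from a HAR file. HAR is plain JSON, so this
    is just a structural walk over log.entries."""
    with open(path_in) as f:
        har = json.load(f)
    for entry in har["log"]["entries"]:
        for msg in (entry["request"], entry["response"]):
            msg["headers"] = [
                h for h in msg.get("headers", [])
                if h["name"].lower() not in SENSITIVE_HEADERS
            ]
            msg["cookies"] = []  # HAR duplicates cookies in parsed form
    with open(path_out, "w") as f:
        json.dump(har, f, indent=2)
```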
And people keep asking me why I use a self-hosted, self synced, password manager instead of using one of those super-easy, super-helpful online services to do it for me.
For the same reason why I don't throw my apartment keys into the local train station's safebox.
You may feel safer that way, but you are not. If a dedicated attacker can breach Okta, they can surely breach the self-hosted, self-synced password manager you manage, which you forgot to update on time, or for which you don't get an update on time.
Remember, these organizations fix the issues weeks, sometimes months, before they release the statement.
If you use open-source and a critical bug is found, you'll get a patch with a press release, while all the other large services will have fixed it already. For the average Jane or Joe, the risk-benefit ratio favors services over self-hosted solutions.
An attacker would first have to find my system, specifically, and then breach it. In terms of the most common method of breaching systems, social engineering, it is me, a single software engineer with a very solid background as a sysadmin, compared to a staff of many hundreds or thousands of employees at a large, visible company.
So simply in terms of attack surface, exposure and discoverability, doing that is a REALLY tall order. Many animals in this world survive not because they are huge and strong, but because they are tiny, fast and next to invisible.
Economics play a huge role in attacking systems. Targeted attacks are time-consuming, costly, and if the end result is one guy's passwords, usually not worth it. People carrying out such attacks want to use a dragnet, not a fishing rod.
> If you use open-source and a critical bug is found
...then many many many large organisations have the same problems as I do, only while being a lot more exposed and visible than me. Because the software I use relies on the same standardized, battle tested, vetted and re-vetted for years technologies as many commercial products.
False, because it's not only a matter of technical difficulty, but also one of economics. Very unlikely for your average person to be a victim of a deliberate targeted attack.
I suppose in that sense hanging out with a rich friend would mean you're very unlikely to be kidnapped.
While probably true most of the time, it does give you a false sense of security, which makes you a very easy and potentially profitable catch. If you fail to update and a scripted bot catches you, the actor behind it will certainly see what's in there.
And if anything, you're more likely to never find out. Which means they can just come back a few months later. Maybe they stole your first CC and then your second CC. I think there are possibilities there that make sense.
Not targeted, but a random attack and data leak is very likely. Much more likely than if they used the service. That's why 99.999% of people should stick with the services.
A random attack that just happens to breach specifically my home server, which uses custom-made service scripts to enable syncing to my devices, isn't reachable from the internet, and is air-gapped from my DMZ?
Which then somehow manages to exfiltrate and decrypt data that is still encrypted with a public key, the private key to which is not stored on that system and is itself encrypted symmetrically?
Yeah, that doesn't sound like a "random attack" to me; that sounds like a subplot in a Keanu Reeves movie...
There is probably a halfway decent chance that the vast majority of participants here would, individually, be economic gold mines for compromise.
A bad term; I would now say "most people." For me, hosting a password manager is like hosting an email server. There are situations where it makes sense. But for 99.99% of people, services are just good enough. Not to mention what a gold mine emails are if they get compromised.
> If a dedicated attacker can breach Okta, they can surely breach your self-hosted, self-synced password manager you manage, which you forgot to update on time, or you don't get an update on time.
Why would you assume the self-hosted alternative even has a server to be breached? If this is the same Okta breach from this week it was a human support channel that was breached. There's nobody like that in front of my setup, and no server or open ports.
Good for you. But the average user wants a password manager on all their devices, including web and mobile browsers. They don't want KeePassX and having to keep track of a single file. There's nothing to gain, except a false sense of security.
The preferences of normal users aren't relevant for the privacy and security minded. They will always choose the easy route, which was the point of the comment you initially replied to. Here's a quote:
> And people keep asking me why I use a self-hosted, self synced, password manager instead of using one of those super-easy, super-helpful online services to do it for me.
The security of self-hosting and keeping the backups up to date (trivial to automate for computer-literate users) is not false compared to getting pwned by customer service with enough access to be dangerous. You're making it sound way more difficult than it is.
Irrelevant. 1password passwords are encrypted with a key only you have. I HIGHLY doubt that you can keep your homeserver more secure than 1password can its servers.
Irrelevant. The risk of a targeted attack is much lower than the risk of being part of an online attack. I doubt there's anyone on this site who hasn't been part of one of the latter but for someone to decide that I have something worth stealing they must first know that I exist.
> 1password passwords are encrypted with a key only you have.
So is my password store.
> I HIGHLY doubt that you can keep your homeserver more secure than 1password can its servers.
I also highly doubt that my trouser pockets are harder than the 1.5cm-thick hardened-steel doors of the storage lockers at the local train station, or that my physical constitution is superior to those of the trained security guards they have there.
And yet, guess where the keys to my apartment are kept. Hint: Not at the train station.
Security is not an absolute measure. It's a cost/benefit tradeoff. 1Password may have customers that make it economical for an adversary to spend $$$$ to breach it despite "better" security, whereas your "less" secure home setup may not be worth the effort.
I wouldn't worry about a targeted attack if I was "nobody" and I was self-hosting (likely Bitwarden?). I'd worry about an attacker scanning and exploiting every instance they can find. Scanning is cheap and provides value in aggregate.
That feature only exists on family/team accounts, and in that case the account that is allowed to perform recoveries has an escrow of the vault passwords of other team members.
The user who currently holds escrow can distribute those recovery keys to other accounts in that family/team/enterprise. This is why 1Password SaaS forces you to have at least one account admin (aka the user with recovery keys). If you somehow have 0 account admins, creating a recovery key -- without full decryption access to a vault, aka, user still knows their password & account key -- is impossible.
Ain't nobody giving a shit about your pictures kept on a home server. Security measures need to be relevant to the chance of attack; they do not exist in a void.
And so mediocre protection of an irrelevant target can be far more effective than good protection of a juicy target.
I understand your point, but if my password vault was a self hosted piece of software and a self hosted vault file, I'd be nervous every day of losing data.
And I am, same as I am worried every day about losing the keys to my apartment. I am speaking as a person who once only got them back by sheer luck (and a young man's honesty), after they fell out of a hole in my trouser pocket.
However, I would be even more nervous if the security of these keys were up to someone other than me. For example, a random employee of a big company, whose access to the system I have no say in, whom I never met, and whose actions I can neither see nor regulate.
Bottom line is: I prefer worrying about myself failing, than someone else. Because I can do something about the former.
And while I am certainly not qualified to fly an aircraft, I do feel that I am quite qualified when it comes to software engineering and systems administration.
I think the self-hosted bit is just for syncing; as long as you have multiple devices, it's unlikely you'll lose data even if you don't follow 3-2-1 backups.
I think this works well if you're already invested in GPG for code signing or something and use terminals a lot. It's a little unusual as key management uses asymmetric crypto, so you can add a password without opening your keyring. The passwords are stored in directories and the directory names are the plaintext name of the password (i.e. the site and username). So it doesn't offer privacy. Tampering is possible, you can delete a password without unlocking the keychain by deleting the directory. So it only really protects read access to a password.
We use a similar GPG-based one in our configuration management infrastructure; it works really well.
The data files are encrypted with keys of the CM main servers and other admins. Git has history in case something that should not get removed got removed, and you can also force the data to always be signed with certain keys via server-side git hooks if you want more accountability than just git logs.
With a little config, even git diff/git log work, showing unencrypted content (if you have the right key).
BeyondTrust reported the issue. Cloudflare is the first known Okta customer targeted, and 1Password is the second known. I think that’s what Ars is saying?
All 3 customers detected it quickly on their own. It sounds from their blog posts like BeyondTrust and Cloudflare both had good alerting that immediately alerted them to the problem. It sounds like 1Password's alerting wasn't as good, and they got lucky that the impacted employee saw a suspicious email saying "here's the report you requested", thought "I didn't request that report", and reported the suspicious email to the security response team.
All 3 customers discovered the issue before Okta fixed the issue. I think all 3 reported the issue to Okta before Okta fixed the issue, but I'm not sure. 1Password and BeyondTrust both reported the issue to Okta before Okta discovered the issue (although 1Password wasn't sure whether the cause was a compromise of Okta's support or malware on the laptop; 1Password didn't find malware, but still thought there might be some). I'm not sure whether Cloudflare reported the issue to Okta before Okta fixed the issue, but it seems likely due to timing. Cloudflare was compromised Oct 18 and immediately discovered it, and the very next day Okta fixed the issue, whereas previously the issue had been languishing for 19 days. That seems to indicate to me that Cloudflare reported it to Okta and Okta kicked their incident response into high gear due to Cloudflare's report.
All 3 customers posted blog posts about the issue after Okta posted a blog post about the issue.
The order they were targeted was: 1Password (Sept 29), BeyondTrust (Oct 2), Cloudflare (Oct 18). I think they reported the issues to Okta in that same order (BeyondTrust reported the issue to Okta the same day they were targeted, I'm not sure about 1Password, and as mentioned above, I suspect Cloudflare reported the issue to Okta the same day they were targeted).
The order they posted blog posts about the issue was: Okta, BeyondTrust, Cloudflare, 1Password.
>Security firm BeyondTrust said it discovered the intrusion after an attacker used valid authentication cookies in an attempt to access its Okta account. The attacker could perform “a few confined actions,” but ultimately, BeyondTrust access policy controls stopped the activity and blocked all access to the account. 1Password now becomes the second known Okta customer to be targeted in a follow-on attack.
That seems to say "BeyondTrust was first and 1Password was second".
What sort of best practices are there for monitoring apps for "suspicious behaviour"? I understand some basics, like checking request/error rates on service scopes, etc., but how sophisticated do these things get? Is there any "intelligent" tooling people use for this kind of anomaly/threat detection, where I can just point it at an event stream and it will identify anomalous behaviour, or do I need to consider everything up front and add rules for what I need to monitor?
I am an armchair layperson with no experience in this, so take this with a grain of salt. First order of business is an audit trail; every operation done on a user's account or data, every change in the system, should be in an audit log containing timestamps, user, etc. Especially things like updating passwords or account details.
Once you have that event stream, you can turn your data-analysis tooling loose on it. A few days / weeks / months of activity gives you a baseline of what's normal, so anything that stands out, like mass password or email updates, should trigger alarms; see the sketch below.
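To make that concrete, here's a toy sketch in TypeScript (names and thresholds are made up; real SIEM tooling is far more sophisticated): bucket audit events per hour and flag counts that are statistical outliers against a trailing baseline.

    interface AuditEvent { ts: number; actor: string; action: string }

    // Count events of one type per hour.
    function hourlyCounts(events: AuditEvent[], action: string): number[] {
      const buckets = new Map<number, number>();
      for (const e of events) {
        if (e.action !== action) continue;
        const hour = Math.floor(e.ts / 3_600_000);
        buckets.set(hour, (buckets.get(hour) ?? 0) + 1);
      }
      return [...buckets.values()];
    }

    // Flag the current count if it's a z-score outlier vs. the baseline.
    function isAnomalous(baseline: number[], current: number, zThreshold = 3): boolean {
      const mean = baseline.reduce((a, b) => a + b, 0) / baseline.length;
      const variance = baseline.reduce((a, b) => a + (b - mean) ** 2, 0) / baseline.length;
      const std = Math.sqrt(variance) || 1; // avoid divide-by-zero on a flat baseline
      return (current - mean) / std > zThreshold;
    }

    // e.g. alert if this hour's "password_changed" count is a 3-sigma outlier:
    // isAnomalous(hourlyCounts(lastMonth, "password_changed"), thisHourCount)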
But those are just armchair hypotheses; is there anyone on here who has experience with audit logs and what to do with them?
That's a good idea for an AI startup — Security as a Service. Just proxy all your requests through a third-party service, which will detect anomalies. But if it doesn't, the service is not liable.
Writing an SSO system is very hard. They have very high uptime requirements and very high security requirements.
1Password might select a different vendor or self-host an open source or commercial SSO system. Still, there are no perfect answers, and each choice has tradeoffs. Even worse, the new system is not guaranteed to be any better than the old system.
One thing which frustrates me when reading a lot of these posts is that many people assume security is easy, organizations should be perfect, and breaches only occur if an organization or service is terrible. None of these are true.
True, but in this case we are literally dealing with a company that claims to be really good at data security; that is the reason users pay them.
It is one thing to outsource a monitoring solution. But outsourcing your SSO to a third party? They can only guarantee how well 1Password works, they cannot guarantee how well Okta works.
I am also baffled by this. It's not like 1Password doesn't have the chops to roll their own SSO in a secure way, especially given that resorting to Okta simply stacks their attack surface on top of your own.
You're absolutely correct. Identity and SSO are hard and require a high degree of competency to do well, and they're massively impactful when they go wrong.
I think the problem is trust. So the question is, why would 1Password trust Okta?
Well, one example of that is Cloudflare, the first Okta customer reported to be hit by that breach, and they literally sell Cloudflare Access, an Okta replacement.
What's the worst case? My understanding is that everything is encrypted at rest and in transit, therefore even if there's a breach, user passwords would be secure.
When you have root access to the environment where the data is encrypted at rest, there is the possibility that you can exploit that environment to either reveal the encryption keys, learn valuable information about the encryption scheme, or directly decrypt the data through internal applications. For example, you have direct console access to a database, and when you view the data it is encrypted. But if you can steal the credentials of an administrator, you may be able to log in to an admin portal as an employee and make their SaaS decrypt it for you. Once you nail that down, you script it with Bash + curl and dump the database that way. Several hours later you have a freshly decrypted database to play with.
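Purely as a hypothetical illustration of that last step (the endpoint, cookie, and ID space here are all made up): once you hold an admin session, the application itself decrypts on your behalf, so "dumping the database" is just a loop.

    // Hypothetical: a stolen admin session cookie and a guessable ID space.
    const STOLEN_COOKIE = "session=...";

    async function dumpRecords(maxId: number): Promise<void> {
      for (let id = 1; id <= maxId; id++) {
        const res = await fetch(`https://app.example.com/admin/customers/${id}`, {
          headers: { cookie: STOLEN_COOKIE },
        });
        // The app's own backend performs the decryption before responding.
        if (res.ok) console.log(await res.text());
      }
    }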
Also, when you have root access to the environment where the data is encrypted in transit, you can intercept one of the handshakes on the front or back end to decrypt the data in transit. The idea is to impersonate both sides of the connection; then you can issue your own certificates and perform your own handshake. For example, maybe there is a web server and a caching server. The caching server sends data to a user outside the network with one certificate, and the web server sends data to the caching server with another certificate. If you man-in-the-middle the web server -> caching server connection, you might be able to rewrite the encryption handshake, depending on the security configuration of the back end. None of this would throw an error in the user's browser.
All of this is irrelevant for 1Password, as their architecture combines an "account key" component and a "password" component to form the full key, and 1Password's SaaS never sees either one. 1P SaaS is essentially a plain file host for a fully encrypted database per customer (technically per vault, so that different users can share vaults, and thus hold keys to the same DB, without sharing their own DBs).
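A much-simplified sketch of that two-secret idea (the parameters and derivation details here are illustrative, not 1Password's actual scheme; see their security white paper for the real construction):

    import { pbkdf2Sync, hkdfSync } from "node:crypto";

    // Neither secret alone suffices: the password can be phished or cracked,
    // but the account key is high-entropy and never leaves the user's devices.
    function deriveVaultKey(password: string, accountKey: string, salt: Buffer): Buffer {
      const fromPassword = pbkdf2Sync(password, salt, 650_000, 32, "sha256");
      const fromAccountKey = Buffer.from(hkdfSync("sha256", accountKey, salt, "vault-key", 32));
      // XOR the two halves: cracking the password still leaves 256 random bits.
      const key = Buffer.alloc(32);
      for (let i = 0; i < 32; i++) key[i] = fromPassword[i] ^ fromAccountKey[i];
      return key;
    }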
This is also why I personally picked 1Password (and have used them for ~10 years) out of all options, because their original OPVault format (which I synced via Dropbox as a plain file host) and current SaaS format had these safeguards.
I find the contents of that whitepaper to be interesting and full of valuable information for anyone looking to make a decision on such services.
I do think that it would be technically possible to, eventually, abuse this system. The problem is, these "guarantees" create a misunderstood sense of security. They want you to believe this is "more" secure. But "more secure" than what? Competing cloud platforms, maybe.
What if someone had access to 1Password's "vaults"? They would only need to phish the user for two pieces of data: the 1Password password and the account key. Then, without ever having hacked your network, they gain all your passwords.
So anyone with a back door into 1Password gets a trove of accounts that they can phish.
But that’s just a regular phishing attack, and borders on social engineering. It’s even a bit harder than classic phishing because you never input your account key other than at setup of the 1P app.
Your key or password aren’t even sent to 1Password, ever. Logging in is done by computing and sending a cryptographic proof, and both sides validate each other through those cryptographic challenges.
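The real protocol is SRP, which is more involved (the server stores only a verifier, never the secret, and both sides end up with a shared session key). The toy challenge-response below, with a shared derived key standing in, is my own simplification and only shows the "prove knowledge, don't transmit it" shape:

    import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

    const derivedKey = randomBytes(32); // stands in for the key derived client-side

    // Server -> client: a fresh, single-use challenge.
    const challenge = randomBytes(16);

    // Client -> server: proof of knowledge; the key itself never crosses the wire.
    const proof = createHmac("sha256", derivedKey).update(challenge).digest();

    // Server: recompute and compare in constant time.
    const expected = createHmac("sha256", derivedKey).update(challenge).digest();
    console.log(timingSafeEqual(proof, expected)); // true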
Again, a back door into 1P SaaS in this case is equivalent to a back door into any cloud storage service, e.g.: Google Drive, Dropbox, etc.
> They would only need to phish the user for two pieces of data; the 1Password password and the account key
I think there’s a misunderstanding here. You have an email, an alphanumeric account key and a password. There are 3 user details, effectively. Then, you can add 2FA, either via TOTP or FIDO2/Passkey. So a total of 3 factors: something you have (account key), something you know (password) and something else you have (a 2FA device).
While the encryption might be solid, if you have a security issue in a sensitive area such as the build environment, you cannot have any confidence in the app.
I'm not saying that this is what happened here, just pointing out that there are other risks here which cannot be mitigated by strong encryption.
That extends to any and all applications. Nothing is stopping a local-only KeePass installation from being exposed via a supply chain attack and starting to send passwords to an attacker's server. In fact, I trust 1Password's corporate infrastructure more than my own ability to lock down my personal devices.
But that requires it to be present before the build is signed, ergo, early in the supply chain. Otherwise, the signature breaks and the build isn’t trusted for the rest of it.
Now, will 1Password pull a LastPass and slowly and silently keep updating their blog post, each time escalating from "a lil bit compromised" to "oopsie, we lost all your data through negligence, but we care about your privacy, we swear"?
This is an unwarranted comparison. First, one need only look at the respective issue reports to see that 1P is much more operationally mature than LP.
More importantly, 1Password's architecture is fundamentally more secure than LastPass', given that password vaults are encrypted with essentially the master password plus an uncrackable random string, versus LastPass' sole use of the master password when generating the encryption key. That's not to say there aren't other avenues of attack (e.g. supply chain attacks on the 1P apps), but if 1P reported a big theft of encrypted vaults, I wouldn't even bother changing my passwords, as opposed to what happened with LastPass.
Oh, is that why they removed Wi-Fi sync in 1Password 8?
As a customer since version 4, I'm disappointed they use cloud crap like Okta and Notion. While those have their uses, if there's any company that shouldn't be doing so, 1Password is it.
They "print money" at a $111M loss per quarter; I wouldn't say they are succeeding at that either, technically speaking!
Edit: when the time comes to roll their debt at a higher interest rate (post-ZIRP), I expect them to trim their workforce and start cost-cutting. They currently have 6k+ employees.
I think we'd be a far healthier society if corporations that don't turn a profit for half a decade would just die, instead of entangling more users in their failure via investment money...
The ability to reuse these extracted cookies is an exfiltration issue. The solution to exfiltration is ensuring the cookie is only valid from a specific client (aka token binding).
Because token binding is just a signature, debugging isn't crippled: the content is still readable, while the cookies in the HAR files stay safe.
We’ll likely see more session implementations using token binding soon. (probably not mTLS due to the UX)
These systems work using public/private key pairs, signing each outgoing request (much like traditional TLS). If you were able to get the cookies, you'd be unable to use them from another browser unless you also extracted the private key.
While the TLS-based version was removed from Chrome [0], you can implement a less efficient (and slightly less secure) version in js. I created an example [1].
The gist is:
1. Client creates a key pair in js
2. Client sends pub key on initial auth
3. Server validates auth and attaches pub key to cookie
4. Client signs each request payload with a timestamp
5. Server validates signature and timestamp
The security issue is in step 1. WebCrypto can generate non-extractable private keys and store them in IndexedDB. However, this assumes no malicious code flips the "extractable" flag before the pair is generated, so this strategy is trust-on-first-use.
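For concreteness, steps 1 and 4 with WebCrypto might look roughly like this (a module context is assumed for top-level await; persisting the key pair in IndexedDB is omitted):

    // Step 1: generate a non-extractable signing key in the browser.
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      false, // extractable: false -- the private key can never be exported
      ["sign", "verify"],
    );

    // Step 4: sign the payload plus a timestamp so captured requests
    // (e.g. cookies leaked in a HAR file) can't be replayed later.
    async function signRequest(body: string): Promise<{ ts: number; sig: string }> {
      const ts = Date.now();
      const data = new TextEncoder().encode(`${ts}.${body}`);
      const sig = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" },
        keyPair.privateKey,
        data,
      );
      return { ts, sig: btoa(String.fromCharCode(...new Uint8Array(sig))) };
    }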