This is pretty incredible. These aren't just good practices, they're fairly bleeding-edge best practices.
1. No more SMS and TOTP. FIDO2 tokens only.
2. No more unencrypted network traffic - including DNS, which is such a recent development and they're mandating it. Incredible.
3. Context aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.
My hope is that this makes things more accessible. We do all of this today at my company, except where we can't - for example, a lot of our vendors don't offer FIDO2 2FA or webauthn, so we're stuck with TOTP.
I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software that are on a vendor-sanctioned allowlist, governed by the signature of a "trusted" party to grant them entry to said list, can meaningfully access networked services, all those who compile their own artifacts (even from completely identical source code) will be excluded from accessing that remote side/service.
Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.
Remote attestation really is killing practical software freedom.
Let's note that this very concerning problem only arises if organizations take an allowlist approach to this "context aware authorization" requirement.
Detecting changes — and enforcing escalation in that case — can be enough, e.g. "You always use Safari on macOS to connect to this restricted service, but now you are using Edge on Windows? Weird. Let's send an email to a relevant person / ask for an MFA confirmation or whatever."
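For what it's worth, a minimal sketch of that escalate-on-change idea might look like the following Python; the context fields and return values are made up for illustration, not taken from any real product:

```python
# Hypothetical sketch: instead of an allowlist, compare the current login
# context against what this user has done before, and only ask for a
# step-up (MFA prompt, email confirmation, ...) when something changed.

from dataclasses import dataclass

@dataclass(frozen=True)
class LoginContext:
    browser: str
    os: str
    country: str

def assess_login(seen_before: set, current: LoginContext) -> str:
    """Return 'allow' for familiar contexts, 'step_up' for new ones."""
    if current in seen_before:
        return "allow"
    # New browser/OS/country combination: don't block, just verify harder.
    return "step_up"

history = {LoginContext("Safari", "macOS", "US")}
print(assess_login(history, LoginContext("Edge", "Windows", "US")))  # step_up
```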
It wasn't I, but this has been an absolute plague on an organization I work with. There are only 3 people, and we all have need to access some accounts but they are personal accounts. Also, the boss travels a lot, often to international destinations. Every time he flies I can almost guarantee we'll face some new nightmare. The worst is "we noticed something is a tiny bit different with you but we won't tell you what it is. We've emailed you a code to the email account that you are also locked out of because something is a tiny bit different with you. Also we're putting a flag on your account so it raises holy hell the next 36 times you log in."
Three people sharing a personal account, with one of them frequently traveling internationally, is such an unusual usage pattern that I'd be really disappointed with a service provider if they _didn't_ flag it for extra verification.
This is the problem with this kind of thing. It just perfectly captures the privilege/shelter of the programmers who come up with these heuristics of “obviously unusual”.
You just described the usage pattern of a pilot with a family, a truck driver, a seaman, etc.
It’s only unusual if your definition of usual is “relatively rich, computer power user”.
Not really. What's the use case there, everyone sharing a Google account?
I travelled a lot for work, and never had issues with account access. Nor did my wife ever have issues related to accounts. We don't share Google accounts though. It sounds like that user has personal accounts being used by three people for business use... Which isn't "A seaman and his family".
> What's the use case there, everyone sharing a Google account?
Yes. Everyone having their own distinct accounts is a property of high computer literacy in the family.
Many of my older extended family members have a single email account shared by a husband and wife. Or in one case the way to email my aunt is to send an email to an account operated by a daughter in a different town. Aunt and daughter are both signed in so the daughter can help with attachments or “emails that go missing”, etc.
> Which isn't "A seaman and his family".
The seaman in this scenario has a smartphone with the email signed in. It’s also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does too, from a tablet. This isn’t that difficult.
> Many of my older extended family members have a single email account shared by a husband and wife. Or in one case the way to email my aunt is to send an email to an account operated by a daughter in a different town. Aunt and daughter are both signed in so the daughter can help with attachments or “emails that go missing”, etc.
As usual with the "personas" scenarios, people create their own unrealistic scenarios (just like when talking about UX or design). The personas you are describing will probably fall back to low-tech methods in most cases; they won't fail to take a plane because GMail locked them out due to unusual activity when they are trying to show the ticket QR in the airport. They will just print it (or have someone print it for them) beforehand.
> The seaman in this scenario has a smartphone with the email signed in. It’s also signed in on the family computer at home. Both the wife and him send email from it. Maybe a kid does to from a tablet. This isn’t that difficult.
You just forgot to add that they use their shared email to communicate with each other via the "Sent" folder.
To be more realistic, the seaman, right after buying his Android phone, will create a new Google account without realizing it, because he probably doesn't know he could use the email account he is already using at home.
But, enough with made-up examples to prove our own points.
This is amazing. He just spelled out for you in great detail the sort of problems that arise in practice in the real world every day and you dismissed them out of hand as being unrealistic. I think you are far more sheltered and far less experienced than you realize. This sort of attitude is exactly what leads to these sorts of things becoming problems in the first place!
> They will just print it (or have someone print it for them) beforehand.
Yes, they will do that precisely because they do not trust technology to work for them, because it frequently does not! I have family members like this. I log in to their accounts on my devices for various reasons. Even worse, I run Linux. We run into these problems frequently. Spend time helping technically illiterate people with things. While doing so, make a concerted effort to understand why they say or think some of the things that they do.
Edit to add, I find it amusing that you make fun of his seaman example. Almost that exact scenario (in terms of number of devices, shared devices, and locations) is currently the case for two of my relatives. Two! And yet you ridicule it.
It may have been true about 10 years ago, but I imagine the heuristics have improved since; I remember being locked out of my Google account while abroad for a month and all it took was to log back in within my "country of origin".
Or a more recent example: my father forgot to bring his Android phone with him abroad, which subsequently locked him out of his account/services; I had to wipe it for him to get his access back.
The frustrating thing to me is that as a user they don't give us any tools to help ourselves. I would gladly make it a "team" account and login individually if we could. I would gladly do a shared TOTP, or whitelist login locations, or anything like that. Or at least give us the option to accept the risk and disable whatever anomaly detection they are applying. But no, that's not how the software world works anymore. Extreme paternalism mode is the only option as a user.
You're right that would totally work with Google. In our case the boss is quite computer illiterate and trying to get him to use LastPass was hard enough. He will tolerate a lot of pain from getting locked out before he'll be willing to learn TOTP :-(
And for many of the SaaS that we use, TOTP doesn't help you avoid the security lock outs.
Have you considered using the same Proxy or VPN? I work remotely and sometimes access services through a VPN based in the country my coworkers are at specifically to avoid this kind of annoyance.
This is a great idea, although the boss is pretty technically challenged so getting him set up on it might be interesting. It's been extremely difficult just to teach him to use LastPass.
There are also a number of browser extensions which may be easier to set up and use for non-technical folks, for example FoxyProxy seems to offer one. I've never tried any myself, though.
That definitely exacerbates the issue, but I don't think it's fair to claim that the absence of recourse is the _only_ problem. If you have limited cell service, limited connectivity, or limited time, then the account being locked can be a significant burden that completely blocks whatever opportunity you were trying to take advantage of. Note that the response time even for newsworthy account locking events is still on the order of hours to days.
> I think 3. is very harmful for actual, real-world use of Free Software.
It has been a long, slow but steady march in this direction for a while [1]. Eventually we will also bind all network traffic to the individual human(s) responsible. 'Unlicensed' computers will be relics of the past.
In practice, the DoD right now uses something called AppGate, which downloads a script on-demand to check for device compliance. It supports free software distributions, but the script isn't super sophisticated: it relies heavily on being able to detect the OS flavor and assumes you're using the blessed package manager, so right now it only works for Debian- and RedHat-descended Linux flavors. It basically just goes down a checklist of STIG guidelines where they are practical to actually check, and doesn't go anywhere near the level of expecting you to have a signed bootloader and a TPM, or checking that all of the binaries on your device have been signed.
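For flavor, a checklist-style compliance probe of that sort might look something like this Python sketch; the individual checks and the distro detection are illustrative guesses, not AppGate's actual script:

```python
# Illustrative sketch only (not AppGate's real script): detect the distro
# family via its package manager, then walk a couple of STIG-like checks
# that are practical to verify from userspace.

import platform
import shutil
import subprocess

def distro_family() -> str:
    if shutil.which("apt-get"):
        return "debian"
    if shutil.which("rpm"):
        return "redhat"
    return "unsupported"

def firewall_active() -> bool:
    # Example check: is a host firewall service running? (service name is a guess)
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", "firewalld"], check=False)
    return result.returncode == 0

def run_checklist() -> dict:
    report = {"os": platform.platform(), "family": distro_family()}
    if report["family"] == "unsupported":
        report["compliant"] = False
        return report
    report["firewall"] = firewall_active()
    report["compliant"] = report["firewall"]
    return report

if __name__ == "__main__":
    print(run_checklist())
```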
>> 3. Context aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.
> I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software that are on a vendor-sanctioned allowlist, governed by the signature of a "trusted" party to grant them entry to said list, can meaningfully access networked services, all those who compile their own artifacts (even from completely identical source code) will be excluded from accessing that remote side/service.
Is that really a problem? In practice wouldn't it just mean you can only use employer-provided and certified devices? If they want to provide their employees some Free Software-based client system, that configuration would be on the whitelist.
I think from the viewpoint of a business/enterprise environment, yes you're right, context-aware authorization is a good thing.
But I think the point of your parent comment's reply was that the inevitable adoption of this same technology in the consumer-level environment is a bad thing. Among other things, it will allow big tech companies to have a stronger grip on what software/platforms are OK to use/not use.
If your employer forces you to, say, only use a certain version of Windows as your OS in order to do your job, that's generally acceptable to most people.
But if your TV streaming provider tells you have to use a certain version of Windows to consume their product, that's not considered acceptable to a good deal of people.
TV streaming services already do as much of this nonsense as they can get away with. The more "secure" of an environment their DRM runs in, the higher resolution the image they'll let you see.
It's just subtle enough (e.g. lower definition, but it will still play), and most people use "secure" enough setups, that only techies, media gurus, or that one guy who's still using a VGA monitor connection end up noticing.
I think browser-based streaming is the only scenario impacted. Apps can already interrogate their platform and make play/no play decisions.
They are also already limiting (weakly) the max number of devices that can playback which requires some level of device identification, just not at the confidence required for authentication.
Well, the fact that I can't do credit card payments for some banks if I don't have an iPhone or a non-rooted Google Android phone is a problem which already exists.
Worse, supposedly this is for security, but attackers who have pulled off a privilege escalation tend to have enough ways to make sure that none of this detection finds them.
In the end it just makes sure you can't mess with your own credit card 2FA process by not allowing you to control the device you own.
This should be obvious from your comment but I think it's worth calling something out explicitly here: a bank that does that is mandating that you accept either Apple's or Google's terms of service. That's a lot of power to give to two huge companies.
I think we'd do well to provide the option to use open protocols when possible, to avoid further entrenching the Apple/Google duopoly.
This 10000x! A bank account is sure as hell more of a utility than a landline!
You need a bank account to do basically anything, and yet consumer banking is largely unregulated (in the consumer-relations sense; they are regulated on the economic side of course). Payments take upwards of 24h and only during work hours (?!?), there are no "easy switch" requirements, mobile apps use shit like SafetyNet, and I've had banks legit tell me "just buy a phone from this list of manufacturers"... PSD2 is trash that only covers B2B interoperability and mandates a security method that has been known to be broken since its invention (SMS 2FA).
I think we'd do well to provide the option to use open protocols when possible.
Of course, the PR copy just writes itself, doesn't it? AD administrators, Apple and Google, banks and everyone else can benefit from context aware authorization.
If your phone is stolen or its state is "compromised", you want immediate Peace of Mind.
Even if it's just misplaced, having that kind of flexibility is just great.
It's about the EU-mandated 2FA auth for online shopping.
E.g. with a credit card.
Due to the way it integrates into websites (or more specifically, doesn't), classical approaches like SMS 2FA (insecure anyway), but also TOTP or FIDO2, do not work.
Instead a notification is sent to a preconfigured app where you then confirm it.
Furthermore, as the app and the payment might be on the same device, the app uses the fingerprint reader (and probably some Google TPM/secrets API, idk).
Theoretically other approaches should work, but practically they tend to not work reliably or at all in most situations.
Technically web-based solutions could be possible by combining a FIDO stick with browser-based push notifications; practically they (banks) don't bother, or there are legal annoyances.
In the future banks may start accepting connections only from a handful of approved browsers, similarly to how 4k streaming on Netflix is not available on all browsers.
> but attackers which pulled of a privilege escalation tend to have enough ways to make sure that non of this detection finds them
The point of these restrictions is to ensure that your device isn't unusually vulnerable to privilege escalation in the first place. If you let them, some users will root their phone, disable all protections, install a malware-filled Fortnite apk from a random website, then stick their credit card company with the bill for fraud when their user-mangled system fails to secure their secrets.
You want to mod the shit out of your Android phone? Go ahead. Just don't expect other companies to deal with your shit, they're not obligated to deal with whatever insecure garbage you turn your phone into.
In practice, many credit unions/banks will only support recent versions of major desktop browsers (i.e. the big three: Chrome, Firefox, Safari) which are known to mandate a good level of security. These browsers will usually have their own OS requirements. For example, Safari is tied to macOS versions directly, while Chrome will drop support for older unmaintained operating systems like Windows XP.
Any system can have malware. That's not the point. To repeat my point again: client restrictions are about making sure user devices are not unusually vulnerable to malware. For example, any Windows device may be infected with malware, but if you're still running Windows XP you're vulnerable to a much larger variety of known malware and more severe exploits. Hence why businesses will want to support only modern versions of eg Chrome which itself will require modern versions of operating systems.
So require that I have an up-to-date browser on my phone. Don't require that I haven't rooted it when every desktop is in an equivalent security state. That's not enough to be "unusually vulnerable".
I'm not asking to use a 10 year old version of android that no modern browsers support any more and is missing many security features.
So what if the desktop is in a worse state? Mobile is still a common threat surface that supports stronger security measures. Unusual is relative, mobile is much more secure by default. It makes no sense to weaken the security posture for mobile users just because the desktop/web doesn't allow a stronger one.
I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
Edit: Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users.
> It makes no sense to weaken the security posture for mobile users just because the desktop/web doesn't allow a stronger one.
The motivation is not "just" that, or for fun, the motivation is that users should be allowed to control their own devices. And have them keep working.
> I guess you also think Android/iOS should just get rid of app permissions because users could just use similar software on their desktops without any permissions gating?
I want it to work... exactly like app permissions. Where if I root it, I can override things.
> Android/iOS are increasingly popular platforms, the security they pioneer far exceeds their desktop predecessors and has improved the average security posture of millions of mobile-focused users
Having that kind of sysadmin lockdown is useful, but if I want to be my own sysadmin I shouldn't be blacklisted by banks.
As a side note, the attack scenario you describe works without needing any rooting or anything; it already exists and isn't detected by their security mechanism.
Also this is about the second factor in 2FA not online banking.
Which you can do on a completely messed up computer.
I'm also not asking to be able to pay contactless with a degoogled Android phone.
Similarly, I'm not asking to not have 2FA; you can use stuff like a FIDO stick with your phone.
Most of these "security" features are often about banks pretending to have proper 2FA without a second device... (And then applying them to other apps they produce, too).
> As a side note the attack scenario you describe works without needing any rooting or anything it already exists and isn't detected by their security mechanism.
Android will block non-Play-Store app installations by default, and root is required for lower level access/capabilities that can bypass the normal sandbox.
I'm honestly not sure what you're saying about 2FA in the rest of your comment, it's kind of vague and there are some possible typos/grammar issues that confuse me. What exactly are you referring to when you say "pretending to have proper 2FA"?
No, you basically have to click OK once (or change a setting, depending on the phone); either way it doesn't require root, and it doesn't really change the attack scenario, as it's based on someone intentionally installing an app from an arbitrary untrusted source.
> root is required
Yeah, like privilege escalation attacks. As you will likely find in many compromised apps. And which on many Android phones work due to vendors not providing updates after some time. And many other reasons.
> What exactly are you referring to when you say "pretending to have proper 2FA"?
EU law says they need to provide 2FA for online banking.
Banks often don't do that for banking apps, as it's inconvenient. Instead they "split the banking app in two parts", maybe throw some fingerprint-based auth mechanism in, and claim they have proper 2FA. (Because it's two app processes running and requires the fingerprint.) Though security researchers have repeatedly shown that it's not a good idea.
Additionally they then require you to only use your fingerprint, not an additional password....
Either way, the point is that secure online banking doesn't require locked down devices in general.
Only on Android is it so simple to sideload, and even then there are lower level app capabilities that require root even for sideloaded apps.
Good security is layered. Just because privilege escalation attacks are sometimes possible without root doesn't mean you throw open the floodgates and ignore the threat of root. The point of banning rooted devices is that privilege escalation attacks are much easier in rooted devices.
Of course online banking doesn't require locked down devices, but online banking is more secure in locked down devices. I don't see why banks should weaken their security posture on root just because they aren't perfect in other areas.
You're right, many secure apps don't go far enough in blocking Android releases that are probably too old & vulnerable. Not all apps are perfect, but blocking rooted and ancient devices is a start.
No, it's starting at the wrong end and doesn't provide an improvement in any relevant way.
Checking for a too-old & vulnerable OS is where you start.
And then you can consider whether to also block other stuff.
There is nothing inherently less secure about a rooted device.
Sure, you can make it less secure if you install bad software, but you can also make it more secure.
Or you just need to lower the minimal screen brightness for accessibility reasons.
You're claiming it's OK to take away people's agency to decide over a major part of their lives (which sadly phones are today) because maybe they could act irresponsibly and do something stupid.
But if we say that is OK, then we first need to start banning cars, because you could drive into a wall with one, and knives, and there's also no way to have a bathtub you could drown yourself in.
And yes that is sarcastic, but there is a big difference between something being "inherently insecure" (driving without a belt) and something that by default is in no way less secure as long as you don't actively go out of your way to make it less secure (e.g. by disabling security protections).
> There is nothing inherently less secure about an rooted device.
This is clearly wrong, rooted devices are much more insecure because they enable low level access to maliciously alter the system. Malware often requires root and will first try to attempt to attain root, which of course isn't necessary if a user has manually unlocked root themselves.
> Your claiming it's ok to take the agency from people away to decide over a major part of their live (which sadly phones are today) because maybe they could act irresponsible and do something stupid.
No one is taking away any user's agency. Users are free to root their phones if they wish (many Android phones at least will allow it), but companies are also free to deny these users service. Users are free to avail themselves of any company's service on a non-rooted phone. "Not using rooted phones to access anything you like" is hardly a major loss of agency.
Phone insecurity is very dangerous IMO, much more dangerous really than bathtubs or perhaps knives. You could argue that vehicles are similarly very dangerous and I'd agree. I don't think we're very far off from locked down self-driving cars. Unfortunately we're not there yet with self-driving tech and the current utility of vehicles still outweighs their immense safety risks. You can't really say that about rooted phones. The legitimate benefits of a rooted phone are largely relevant to developers, not the average user, and most users never attempt to tinker with their phone.
It does seem like a privilege escalation in the reverse direction, allowing banks to escalate a decision about security into one about devices. They should not have that power, and it's far from the only solution.
Cars come with AndroidAuto (and whatever the iOS equivalent is). Only apps signed by Google can communicate with AndroidAuto. I don't want to use a Google phone or app to display OSM on my car's media screen. Why is this legal?
Once you're talking about interactive information displays in cars that can be accessed while the vehicle is in motion, traffic and highway safety regulations start cropping up. When you ask "Why is this legal," try rephrasing it to, "Why is it legal for companies to make it so difficult to play Doom on my BMW's touch screen," and you will probably arrive at the answer.
> But I think the point of your parent comment's reply was that the inevitable adoption of this same techonology in the consumer-level environment is a bad thing.
And this has happened before, with Intel ME that was and still is useful if you have a fleet of servers to manage but a hell of a security hole outside of corporate world.
And now that Windows 11 all but requires a working TPM to install (although there are ways to bypass it for now), I would not be surprised if Netflix and the rest of the content MAFIAA would follow their Android approach and demand that the user have Secure Boot enabled, only Microsoft-certified kernel drivers loaded, and the decryption running in an OS-secured sandbox that even a Local Administrator-level account can't access.
>But if your TV streaming provider tells you have to use a certain version of Windows to consume their product, that's not considered acceptable to a good deal of people.
This is already the case with Netflix -- 4k video content cannot be played on Linux.
> Is that really a problem? In practice wouldn't it just mean you can only use employer-provided and certified devices?
That's fine for employees doing work for their employers. It's not fine for personal computing on personal devices that have to be able to communicate with a wide variety of other computers belonging to a wide variety of others, ranging from organizations like banks to other individuals.
> Is that really a problem? In practice wouldn't it just mean you can only use employer-provided and certified devices?
Depends what you think big corporations' centrally managed IT equipment is like.
Theoretically, it could mean you get precisely the right tools to do your job, with the ideal maintenance and configuration provided effortlessly.
But for some organisations, it means mandatory Internet Explorer and Flash for compatibility with some decrepit intranet, crapware like McAfee that slow the system to a crawl, baffling policies like not letting you use an adblocker, and regular slow-to-install but unavoidable updates that always happen just as you're giving that big presentation.
I'd feel 100% differently about this stuff if the NSA or some other cybersecurity gov arm making these rules used their massive cybersecurity budgets to provide free MFA, TLS, encrypted DNS, etc., whether US gov hosted or via non-profit (?) partners like LetsEncrypt.
OSS & free software otherwise has a huge vendor tax to actually get used. As is, this feels like economic insecurity & anti-competition via continued centralization to a small number of megavendors. Rules like this should come with money, and not to primes & incumbents, but utility providers.
Sure, our team is internally investing in building out a lot of this stuff, but we have security devs & experience, while the long tail of software folks use doesn't. The gov sets aside so much $$$$ for the perpetual cyber war going on, but not for simple universal basics here :(
> A remote cryptographically-signed attestation is not reproducible
No one wants to reproduce an attestation. If you could, it could be copied, and if you can copy an attestation any hardware could send it to prove it was something else - something the other end trusts - rendering it useless for its intended purpose.
However, the attestation is attesting the hardware you are running on is indeed "reproduced", as in it is a reliable copy of something the other end trusts. It could be a device from Yubi Key and in effect you are trusting Yubi Corp's word on the matter. Or, it could be an open source design everybody can inspect, reproducibly rendered in hardware and firmware. Personally, I think trusting the former is madness, as is trusting the latter without a reproducible build.
I don't know much about attestation, but the repro builds folks have an approach for dealing with signatures; you build once, then copy the signature into the source, so that as long as the unsigned build result is bit-identical, the signatures still match and anyone can reproduce the signed build result.
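Roughly, the verification side of that workflow could look like this (a sketch; the file names and the use of a gpg detached signature are assumptions, not any specific project's setup):

```python
# Sketch: if my rebuild is bit-identical to the released artifact, the
# project's detached signature over the release also covers my rebuild.
# Paths and the gpg invocation are assumptions for illustration.

import hashlib
import subprocess

def sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256("my-rebuild/app.bin") != sha256("release/app.bin"):
    raise SystemExit("not reproducible: my build differs from the release")

# Identical bytes, so verifying the detached signature against my own
# rebuild is equivalent to verifying it against the official release.
subprocess.run(["gpg", "--verify", "release/app.bin.sig", "my-rebuild/app.bin"],
               check=True)
print("bit-for-bit reproduction, signature verifies")
```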
I wish I had responded earlier, because now this entire thread is full of nonsense and I can't really respond to everything.
But attestation can mean a lot of things and isn't inherently in conflict with free software. For example, at my company we validate that laptops follow our corporate policy, which includes a default-deny app installation policy. Free software would only, in theory, need a digital signature so that we could add that to our allowlist.
> For example, at my company we validate that laptops follow our corporate policy, which includes a default-deny app installation policy.
Presumably (hopefully) these are corporate-owned devices, with a policy like that. Remote attestation is fine if it's controlled by the device's owner, and you can certainly run free software on such a device, if that particular build of the software has been "blessed" by the corporation. However, the user doesn't get the freedoms which are supposed to come with free software; in particular, they can't build and run a modified version without first obtaining someone else's approval. At the very least it suggests a certain lack of respect for your employees to lock down the tools they are required to use for their job to this extent.
It "should" suffice, but entities like banks and media companies are already going beyond this. As the parent points out, many financial and media apps on Android will just simply not work if the OS build is not signed by a manufacturer on Google's list. Build your own Android ROM (or even use a build of one of the popular alternative ROMs) and you lose access to all those apps.
The clearer way to put this is: when faced with a regulatory requirement, most of the market will choose whatever pre-packaged solution most easily satisfies the requirement.
In the case of client attestation, this is how we get "Let Google/Apple/Microsoft handle that, and use what they produce."
And as an end state, this leads to a world where large, for-profit companies provide the only whitelisted solutions, because they have the largest user bases and offer a turn-key feature, and the market doesn't want to do additional custom work to support alternatives.
In my experience, it isn’t that companies don’t want to put in the work, it’s that some middle manager made a decision. I’ve been told to “implement login with X” more than once in my career and when asked what about Y or Z, they say, “we only want X” with no further explanation.
For something like LineageOS, ironically, the solution is to root your device to adjust build properties so it looks signed.
My vanilla LineageOS install fails but I can root with Magisk, enable Zygisk to inject code into Android, edit build properties, add SafetyNet fix and now my device is good to go?
It's crazy to think the workaround is "enable arbitrary code injection" (Zygisk)
Yeah, that's the crazy thing: that this entire "verification" house of cards can be so easily defeated by just faking the response to an API call from code that you can control (after unlocking your bootloader and installing your own code). I guess this is why there is a push to stop allowing bootloaders to be unlocked.
Even locked bootloaders only help a little. Afaik all iOS devices have locked bootloaders but that doesn't stop jailbreaking. I imagine Android, with spotty vendor support track record, would be even easier
This, or we could have dual booting that's relatively as easy to do on mobile as it is on PCs.
Currently, you'd have to find an unlocked phone, hope there is a downloadable factory image, re-flash, re-lock, and re-install to run whatever needs attestation. Potentially using something like Android's DSU feature, this could all be a click or two, and you could be back running Lineage with a restart.
I mean... no thanks? I remember dual-booting Windows and Linux (and macOS and Linux) for years back in the 00s, and it was inconvenient and annoying. I don't want to go back to that, even (especially?) on a phone.
Dual booting isn't so bad; I've almost always had a gaming partition somewhere, while my current install doesn't even run 32-bit binaries. That said, attestation should be possible with user-locked bootloaders, not just vendor-locked bootloaders. I suppose Magisk provides something close to this currently with bootloaders that can't be re-locked for custom ROMs, so more power to it.
I’m not even so sure I’m totally against banks doing that either.
From where I sit right now, I have within arms reach my MacBook, a Win11 Thinkpad, a half a dozen Raspberry Pis (including a 400), 2 iPhones only one of which is rooted, an iPad (unrooted) a Pinebook, a Pine Phone, and 4 Samsung phones one with its stock Android7 EOLed final update and three rooted/jailbroken with various Lineage versions. I have way way more devices running open source OSen than unmolested Apple/Microsoft/Google(+Samsung) provided Software.
My unrooted iPhone is the only one of them I trust to have my banking app/creds on.
I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it, but they might be already, I only ever really use it on my AppleTV and my iPad. I expect I’d be able to use it on my MacBook and thinkpad, but could be disappointed, I’d be a bit surprised if it ran on any of my other devices listed…
Putting a banking app on your pocket surveillance device is one of the least secure things you can do. What happens if you're mugged, forced to login to your account, and then based on your balance it escalates to a kidnapping or class resentment beatdown? Furthermore, what happens if the muggers force you to transfer money and your bank refuses to roll back as unauthorized because their snake oil systems show that everything was "secure" ?
> I’m not even so sure I’m totally against banks doing that either.
The hole in this reasoning is that you don't need the app; you can just sign into the bank's website from the mobile browser, and get all the same functionality you'd get from the app. (Maybe you don't get a few things, like mobile check deposits, since they just don't build features like that into websites for the most part.) The experience will sometimes be worse than that of the app, but you can still do all the potentially-dangerous things without it. So why bother locking down the app when the web browser can do all the same things?
> I’d be a bit pissed if Netflix took my money but didn’t run where I wanted it
I actually canceled my HBO Max account when, during the HBO Now -> HBO Max transition, they somehow broke playback on Linux desktop browsers. When I wrote in to support, they claimed it was never supported, so they weren't obligated to care. I canceled on the spot.
Reproducible builds are a thing, I don't know how widespread they are. I know the monero project has that built in so everyone compiles the exact same executable regardless of environment, and can verify the hash against the official version https://github.com/monero-project/monero
Reproducible builds allow the user of the software to verify the version that they are using or installing. They do not, by themselves, allow the sort of remote attestation which would permit a service to verify the context for authentication—the user, or a malicious actor, could simply modify the device to lie about the software being run.
Secure attestation about device state requires something akin to Secure Boot (with a TPM), and in the context of a BYOD environment precludes the device owner having full control of their own hardware. Obviously this is not an issue if the organization only permits access to its services from devices it owns, but no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
> no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
It seems like the sensible rule of thumb is: If your organization needs that level of control, it's on your organization to provide the device.
Or we could better adopt secure/confidential computing enclaves. This would allow the organization to have control over the silo'd apps and validate some degree of security (code tampering, memory encryption, etc) but not need to trust that other apps on the device or even the OS weren't compromised.
I'm uncomfortable letting organisations have control over the software that runs on my hardware. (Or, really, any hardware I'm compelled to use.)
Suppose the course I've been studying for the past three years now uses $VideoService, but $VideoService uses remote attestation and gates the videos behind a retinal scan, ten distinct fingerprints, the last year's GPS history and the entire contents of my hard drive?¹ If I could spoof the traffic to $VideoService, I could get the video anyway, but every request is signed by the secure enclave. (I can't get the video off somebody else, because it uses the webcam to identify when a camera-like object is pointed at the screen. They can't bypass that, because of the remote attestation.)
If I don't have ten fingers, and I'm required to scan ten fingerprints to continue, and I can't send fake data because my computer has betrayed me, what recourse is there?
¹: exaggeration; no real-world company has quite these requirements, to my knowledge
1) The requirements themselves. These are different for consumer vs employee type scenarios. So general, I'd prefer we err on the side of DRM free for things like media, but there are legitimate concerns around things like data privacy when you are an employee of an organization handling sensitive data.
2) Presuming there are legitimate reasons to have strong validation of the user and untampered software, we have the choice of A) using only organization supplied hardware in those case or B) using your own with some kind of restriction. I'd much prefer to use my own as much as possible ... if I can be ensured that it won't spy on me, or limit what I can do, for the non-organization specific purposes I've explicitly opted-in to enable.
> I'm uncomfortable letting organisations have control over the software that runs on my hardware.
I'm not, if we can sandbox. I'm fine with organizations running javascript in my browser for instance. Or running mobile apps that can access certain data with explicit permissions (like granting access to my photos so that I can share them in-app). I think we can do better with both more granular permissions, better UX, and cryptographic guarantees to both the user and the organization that both the computation and data is operating at the agreed level.
Secure enclaves are still dependent on someone other than the owner (usually the manufacturer) having ultimate control over the device. Otherwise the relying party has no reason to believe that the enclave is secure.
Let me elaborate on the problem I do have with remote attestation, no matter if I can verify that the signed binary is identical with something I can build on my own.
I use LineageOS on my phone, and do not have Google Play Services installed. The phone only meaningfully interacts with a very few and most basic Google services, like an HTTP server for captive portal detection on Wifi networks, an NTP server for setting the clock, etc. All other "high-level" services that I am aware of, like Mail, Calendaring, Contacts, Phone, Instant Messaging, etc., are either provided by other parties that I feel more comfortable with, or that I actually host myself.
Now let's assume that I would want or have to do online/mobile banking on my phone - that will generally only work with the proprietary app my bank provides me with. Even if I choose to install their unmodified APK, (any lack of) SafetyNet will not attest my LineageOS-powered phone as "kosher" (or "safe and secure", or "healthy", or whatever Google prefers calling it these days), and might refuse to work. As a consequence, I'm effectively unable to interact via the remote service provided by my bank, because they believe they've got to protect me from the OS/firmware build that I personally chose to use.
Sure, "just access their website via the browser, and do your banking on their website instead!", you might say, and you'd be right for now. But with remote attestation broadly available, what prevents anyone from also using that for the browser app on my phone, esp. since browser security is deemed so critical these days? I happen to use Firefox from F-Droid, and I doubt any hypothetical future SafetyNet attestation routine will have it pass with the same flying colors that Google's own Chrome from the Play Store would. I'm also certain that "Honest c0l0's Own Build of Firefox for Android" wouldn't get the SafetyNet seal of approval either, and with that I'd be effectively shut off from interacting with my bank account from my mobile phone altogether. The only option I'd have is to revert back to a "trusted", "healthy" phone with a manufacturer-provided bootloader, firmware image, and the mandatory selection of factory-installed, non-removable crapware that I am never going to use and/or (personally) trust that's probably exfiltrating my personal data to some unknown third parties, sanctified by some few hundreds of pages of EULA and "Privacy" Policy.
With app stores on all mainstream and commercially successful desktop OSes, the recent Windows 11 "security and safety"-related "advances" Microsoft introduced by (as of today, apparently still mildly) requiring TPM support, and supplying manufacturers with "secure enclave"-style add-on chips of their own design ("Pluton", see https://www.techradar.com/news/microsofts-new-security-chip-...), I can see this happening to desktop computing as well. Then I can probably still compile all the software I want on my admittedly fringe GNU/Linux system (or let the Debian project compile it for me), but it won't matter much - because any interaction with the "real" part of the world online that isn't made by and for software freedom enthusiasts/zealots will refuse to interact with the non-allowlisted software builds on my machine.
It's going to be the future NoTCPA et al. used to combat in the early 00s, and I really do dread it.
I hadn't thought about extending this attestation to the browser build as a way to lock down web banking access. That's truly scary, as my desktop Linux build of Firefox might not qualify, if this sort of thing would come to pass.
Wow, the monero project looks like they have some great ideas. I like this reproducible build - may try to get my team to work towards that.
It seems like monero has more of a focus on use as a real currency, so hopefully it isn't drawing in the speculative people and maintains its real use.
> If only specific builds of software that are on a vendor-sanctioned allowlist
Yes, but for government software this is a bog-standard approach. Not even "the source code is publicly viewable to everyone" is sufficient scrutiny to pass government security muster; specific code is what gets cleared, and modifications to that code must also be cleared.
> I think 3. is very harmful for actual, real-world use of Free Software.
I hold the reverse view. The only security token I'd trust is one where the only thing that isn't open is the private keys the device generates when you press the reset button. The rest, meaning from the CPU up (say RISC-V) and the firmware, must be open to inspection by anybody. In fact, it should also be easy to peel away the silicon protection so you can see everything bar the cells storing the private keys. The other non-negotiable is that the thing that computes and transmits the "measures" of the system being attested to (including its own firmware) cannot be changed - meaning no stinking "security" patches are allowed at that level. If it's found broken, throw it away, as the attestation is useless.
The attestation then becomes: the device you hold is a faithful rendering/compilation of open source design document X by open source compiler Y. And I can prove that myself, by building X using Y and verifying the end result looks like the device I hold. This process is also known as a reproducible build.
What we have now (eg, YubiKeys) is not that. Therefore I have to trust Yubi Corp. To see why that's a problem, see the title of this story. It has the words "Zero-Trust" in it.
In reality of course there is no such thing as "Zero-Trust". I will never be able to verify everything myself, ergo I have to trust something. The point is there is a world of difference between trusting an opaque black box like Yubi Corp, and trusting an open source reproducible build, where a cast of random thousands can crawl over it and say, "it seems OK to me". In reality it's not the ones that say "it seems OK" you are trusting. You are trusting the mass media (places like this in other words), to pick up and amplify the one voice among millions that says "I've found a bug - and because it's open I can prove it" so everyone hears it.
So to me it looks to be the reverse of what you say. Remote attestation won't kill software freedom. Remote attestation, done in a way that we can trust, must be built using open source. Anything less simply won’t work.
>Remote attestation really is killing practical software freedom.
Which will continue marching forward without pro-user legislation. Which is extraordinarily unlikely to happen since the government has a vested interest in this development.
"Software freedom" doesn't really make sense when the software's function is "using someone else's software". You're still at the mercy of the server (which is why remote attestation is even interesting in the first place).
If you want to use free software, only connect to Affero GPL services, don't use nonfree services, and don't consume nonfree content.
You're usually only at the mercy of the server because you don't control the client; a free YouTube client like VLC remains useful. A free Microsoft Teams client would be useful. I allege that free VNC clients are also useful, even if there's non-free software on the other end.
How could a build be verified to be the same code without some kind of signature? You can't just validate a SHA; that could be faked from a client.
If you want to get a package that is in the Arch core/ repo, doesn't that require a form of attestation?
I just don’t see a slippery slope towards dropping support for unofficial clients, we’re already at the bottom where they are generally and actively rejected for various reasons.
Still, the Android case is admittedly disturbing, it feels a lot more personal to be forced to use certain OS builds; that goes beyond the scope of how I would define a client.
> How could a build be verified to be the same code without some kind of signature? You cant just validate a SHA, that could be faked from a client.
This depends on how far down the rabbit hole you want to go. If it were Secure Boot, where only signed processes can run, would that make you feel better? If it doesn't... what would?
That’s fair. It's just to say there is a lot of context for client verification in software. Competitive multiplayer gaming has become an arms race of exploits and invasive anti-cheat measures; there is no concept of bring-your-own-client when there is money on the line.
Valve has taken a less heavy-handed approach and let users have more freedom over their client and UI, but they also have a massive bot problem in titles like TF2.
I can’t connect to my work network from a random client, and it will throw flags and eventually block me if I connect with an out-of-date OS version.
I can’t present any piece of paper with my banking data and a signature on it and expect other parties to accept it. I have to present it on an authorized document.
Totally locking down a computer to just a pre-approved set of software is a huge step towards securing it from the kind of attackers most individuals, companies, and governments are concerned with. Sacrificing "software freedom" for that kind of security is a trade off that the vast majority of users will be willing to make - and I think the free software community will need to come to terms with that fact at some point and figure out what they want to do about it.
> Totally locking down a computer to just a pre-approved set of software is a huge step towards securing it from the kind of attackers most individuals, companies, and governments are concerned with.
No, it isn't. It's a way for corporations and governments to restrict what people can do with their devices. That makes sense if you're an employee of the corporation or the government, since organizations can reasonably expect to restrict what their employees can do with devices they use for work, and I would be fine with using a separate device for my work than for my personal computing (in fact that's what I do now). But many scenarios are not like that: for example, me connecting with my bank's website. It's not reasonable or realistic to expect that to be limited to a limited set of pre-approved software.
The correct way to deal with untrusted software on the client is to just...not trust the software on the client. Which means you need to verify the user by some means that does not require trusting the software on the client. That is perfectly in line with the "zero trust" model advocated by this memo.
Wrong. 80% of attacks are social engineering ones, in which an employee is convinced to make a bank transfer, open some document, or install some program. From there, it's often exploiting widespread software commonly found in large organizations.
Everything you said cannot be further from the truth.
Hence the pre-approved software restrictions. In a locked down system, even the most gullible employee won't have the authorization to "install some program".
I'd also hope that businesses care about more than 80% of attacks, preferably they should care about 100% of attacks. Hence, pre-approved software restrictions.
If you can argue that remote attestation doesn't provide additional security, then I'd love to hear that argument. But it seems like a fairly clear-cut case that it does provide additional security, and I don't think it's reasonable to accept a lower level of security for the sake of allowing unverified builds of open-source software.
There are specific contexts where you want to distribute information as widely as possible, and in those contexts it makes sense to allow any software versions to access the information. But for contexts where security is important, that means verifying the client software isn't compromised.
It can go pretty terribly sideways just like antivirus with poorly coded, proprietary, privileged agents running on end user devices collecting data.
I worked at a place that only allowed "verified" software before and it's an ongoing battle to keep that list updated. Things like digital signatures can be pretty reliable but if you're version pinning you can make it extremely difficult to quickly adopt patched versions when a vulnerability comes out.
Also, “Password policies must not require use of special characters or regular rotation.”
They even call out the fact that it's a proven bad practice that leads to weaker passwords - and such policies must be gone from government systems in 1 year from publication of the memo. It's delightful.
Somewhat unrelated, but hopefully this also means TreasuryDirect will get rid of its archaic graphical keyboard that disables the usage of password managers.
(Graphical keyboards are an old technique to try to defeat key loggers. A frequent side effect of a site using a graphical keyboard is that the developer has to make the password input field un-editable directly, which prevents password managers from working, unless you use a user script to make the field editable again.)
Just saying in this in case it will help you. For treasurydirect, you can use inspect element and change the value="" field on the password element, and paste in your password from your password manager. It's not as convenient as autofill from your password manager, but it sure beats using the graphical keyboard.
Thanks! That would definitely be a way to do it. I was hinting at something similar by saying "unless you use a user script to make the field editable again". You could also run a bookmarklet that makes the input editable using JavaScript, and then using the password manager. But it's a pain in any case if you're using the site on a mobile device.
Yeah, while the clear and correct focus overall is on moving away from passwords entirely (FINALLY!!!!!), it's still nice to see something immediately actionable on at least improving policies in the meantime, since those should be very low hanging fruit. Although one thing I don't see mentioned is doing away with (or close enough) password max character limits and requiring that everything get hashed, step 1. Along with rotation and silly complexity rules, stupid low character limits are the other big irritation with common systems. If passwords must be used they should be getting hashed client-side anyway (and then again server-side), so the server should be getting a constant set of bits no matter what the user is inputting. There isn't really any need at all for character limits at this point. If anything it's the opposite: minimums should be a lot higher. If someone is forced to use at least 20-30 characters, say, that essentially requires a password manager or diceware. And sheer length helps even bad practices.
But maybe they didn't bother giving much more effort to better passwords because they really don't want those to stick around at all and good for them. Password managers themselves are a bandaid on the fundamentally bad practice of using a symmetric factor for authentication.
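As a concrete illustration of the hash-on-both-ends idea from the previous paragraph, here is a minimal Python sketch; the parameters and the choice of PBKDF2 on the server side are illustrative, not a recommendation of an exact scheme:

```python
# Sketch: the client pre-hashes the passphrase to a fixed-length digest,
# so the server sees the same number of bytes no matter how long the
# passphrase is; the server then applies its own salted slow hash.
# Parameters are illustrative only.

import hashlib
import hmac
import os

def client_prehash(passphrase: str) -> bytes:
    # Any length of passphrase becomes a constant 32 bytes.
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

def server_store(prehash: bytes) -> tuple:
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", prehash, salt, 200_000)
    return salt, stored

def server_verify(prehash: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", prehash, salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = server_store(client_prehash("a thirty plus character diceware style passphrase"))
print(server_verify(client_prehash("a thirty plus character diceware style passphrase"), salt, stored))
```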
I am a bit concerned that this will be read as "Password policies must require the use of no special characters", possibly as a misguided attempt to push people away from using "Password123!" as the password. I wish the memo had spelled out a little more clearly that there's nothing wrong with special characters, but they shouldn't be required. Also, is a whitespace a special character?
But if a whitespace is not a special character (or punctuation) we can add entropy with "Jupiter is the smallest planet!" while still being human readable and using the passphrase paradigm.
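A quick back-of-envelope comparison of why length and passphrases win, sketched in Python (the alphabet size and word-list size are illustrative assumptions):

```python
# Rough entropy comparison, assuming characters/words are chosen at random.
# 7776 is the size of the standard diceware word list; 80 is a rough count
# of easily typed printable characters.

import math

def charset_bits(length: int, alphabet: int = 80) -> float:
    return length * math.log2(alphabet)

def wordlist_bits(words: int, list_size: int = 7776) -> float:
    return words * math.log2(list_size)

print(round(charset_bits(10)))   # ~63 bits for a random 10-char password
print(round(wordlist_bits(5)))   # ~65 bits for a 5-word passphrase, far easier to remember
```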
They accepted edits via pull request when that was in the works! [1] Such a better model of giving feedback or suggesting edits compared to sending in a marked-up PDF.
SHOULD NOT and MUST NOT are very different from a compliance perspective.
The former usually means something between nothing at all and “you can do it but you have to write paperwork that no one will actually read in detail, but someone will maybe check the existence of, if you do”.
The latter means “do it and you are noncompliant”.
https://datatracker.ietf.org/doc/html/rfc2119 is a good reference, although those precise definitions may or many not be in effect in any particular situation (including this one).
Thanks for pointing out the improvement over NIST, it wasn’t clear to me. But did you mean to reply to my parent? Both the draft and the current language say SHOULD NOT. I’d rather “must”, but will settle for “should”; the NIST docs have certainly made my work easier. Hopefully NIST improves, and perhaps this memo will help!
The essential purpose of my comment was only to correct my parent on the date.
I worked for the gov't ~20 years ago - in IT - and even I hated our password policies. I just kept iterating the same password because we had to change it every 6 weeks.
By the time we implement any of these things, if ever, they certainly won't be. I work on military networks and applications, and it's hard for me to believe that I'll see any of this within my career at the pace we move. This is the land of web applications that only work with Internet Explorer, ActiveX, Silverlight, Flash, and Java Applets, plus servers running Linux 2.6 or Windows Server 2012.
The idea of "Just-in-Time" access control where "a user is granted access to a resource only while she needs it, and that access is revoked when she is done" is terrifying when it takes weeks or months to get action on support tickets that I submit (where the action is simple, and I tee it up with a detailed description of whatever I need done).
It took us NINE MONTHS to get a server installed in a data center a few years back. This was Marine-Corps fielded hardware running an ATO'd[1] software stack for real-world situational awareness, going into a Marine Corps data center. The people that run the data center have a glacial Change Management process, exacerbated by everyone in their organization not talking to each other, even though they are separated by cubicle walls.
I too have no faith of seeing this stuff implemented anytime soon...
[1] (Authority to Operate, basically approval from the highest IT authorities to utilize something on a DoD network)
I prefer to keep my email as dumbly secured as possible. I’ll never forget this one time I was on my sailboat with no cell service and only an open WiFi connection from shore. I couldn’t log in to anything via SMS auth. Same thing with FIDO keys when traveling. Lost luggage? No logging in for you until you get home to get your backup? Cut your finger while cooking? No logging in for you! Have to wear a face mask? No logging in for you!
To be clear, I don’t have a better solution. But all the second-factor stuff is fundamentally broken right when you are most likely to need access to the service.
So, I'm somebody who thinks about contingencies a lot and to my mind, there's just not a lot of gap between situations where you don't need credentials (e.g. satellite beacons don't care who you are) and where you don't have credentials (oh no, I lost all the gear) so it doesn't represent a big worry for me.
I don't want the consular officials to be unable to authenticate me in a foreign country because I lost my phone, or for my bank to be unable to release funds because I don't have their card or my Security Key, but I feel 100% OK with losing access to Gmail or Hacker News, or whatever for say a few days until I can secure replacement credentials.
The SMS point is part of why moving to FIDO keys is a good idea - the NFC enabled ones work with modern smartphones and allow you to login.
As for lost luggage, I carry mine on my keychain, another one in the laptop itself (USB-C Yubikey) and one in my safe at home - if all three are ever destroyed or lost I also have backup codes available as password protected notes on several devices.
We've been building to these goals at bastionzero so I've been living it every day, but it feels validating and also really strange to see the federal government actually get it.
Force banks to do this, immediately. They can levy it on any organization that has a banking license or wants access to FEDWire or the ACH system. Force it for SWIFT access too, if the bank has an online banking system for users.
I asked my bank about their 16 character limit on password length because it suggests they are saving the password rather than some kind of hash. Their response - don't worry about it, you aren't responsible for fraud.
Banks aren't going to want to implement any changes that cost more (in system changes and customer support) than the fraud they prevent.
These are strong requirements, but I fear the government just wants more transparency of citizens. Remote-attestation of trusted platforms could lead to the worst surveillance attempts we have ever seen. And it would require you to trust your government. That is a bad idea from a security point of view.
edit: The source of my claim that governments tend to extend surveillance is pretty well documented I believe. So much so that I believe it is worthy to insert the problem into debates about anything relating to security. Because security often serves as the raison d'être for such ambitions.
It's very phishable. Attackers will send text messages to your users saying "Hi, this is Steve with the FooCorp Security Team; we're sorry for the inconvenience, but we're verifying everyone's authentication. Can you please reply with the code on your phone?"
It's even worse with texted codes because it's inherently credible in the moment because the message knows something you feel it shouldn't --- that you just got a 2FA code. You have to deeply understand how authentication systems work to catch why the message is suspicious.
You can't fix the problem with user education, because interacting with your application is almost always less than 1% of the mental energy your users spend doing their job, and they're simply not going to pay attention.
Nope, TOTP is vulnerable to phishing just the same.
Ordinary users think the fact your phishing site accepted their TOTP code is actually reassuring. After all, if you were a fake site, how would you have known that was the correct TOTP code? So this must be the real site.
The only benefit TOTP has over passwords is that an attacker needs to use it immediately, but they can fully automate that process, so this only very slightly raises the barrier to entry; a smart but bored teenager can definitely do it, as can anybody who can Google for the tools.
Worse, TOTP involves a shared secret, so bad guys can steal it without you knowing. They probably won't steal it from your bank because the bank has at least some attempt at security, but a lot of other businesses you deal with aren't making much effort, and so your TOTP secret (not just the temporal codes) can be stolen, whereupon all users of that site relying on TOTP are 100% screwed.
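To make the shared-secret point concrete, this is roughly all there is to a TOTP code (per RFC 6238/4226, sketched in TypeScript for Node); anyone who holds the secret computes the same codes forever:

    import { createHmac } from 'node:crypto';

    // Whoever knows `secret` can generate valid codes - the site, you, or a thief.
    function totp(secret: Buffer, step = 30, digits = 6, now = Date.now()): string {
      const counter = Buffer.alloc(8);
      counter.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / step)));
      const hmac = createHmac('sha1', secret).update(counter).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;                  // dynamic truncation
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return code.toString().padStart(digits, '0');
    }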
Notice that WebAuthn still isn't damaged if you steal the auth data, Google could literally publish the WebAuthn authentication details (public key and identifier) for their employees on a site or paint them on a huge mural or something and not even make a material difference to their security - which is why this Memo says to do WebAuthn.
I was wondering the same thing - here's an article I found that describes both approaches. Not being in the cryptography space myself I can't comment on how accurate it is, but it passes my engineering smell test.
Edit - sorry that this is really an ad for the writer's products. On the other hand, there's a hell of a bounty for proving them insecure / untrustworthy, whatever your feelings on "the other crypto".
MITM phishing attacks (also called real time phishing).
If someone puts up a fake domain that proxies everything between you and the server (a fake domain with valid HTTPS, which they social-engineered you into visiting)?
It looks like FIDO2/WebAuthn binds the signed challenge response to the origin the browser actually sees (= the phishing domain), so simply forwarding that response to the original server will fail. The attacker also can't re-sign the response for the real origin, because the signature comes from the user's private key, which never leaves the authenticator enrolled during the registration phase. So only the registered user can produce a valid response.
This leaves only two options for a phishing attack: 1) get a valid certificate for the original domain [1], or 2) force-downgrade the user to old TOTP [2].
SMS is bad due to MITM and SIM cloning. In the EU many banks still use smsTAN, and it leads to lots of security breaches. It's frustrating that some don't offer any alternatives.
However, is FIDO2 better than chipTAN or similar? I like simple airgapped 2FAs, but I'm not an expert.
In particular [Thomas knows this, for anybody else reading], WebAuthn (the way you use FIDO for the web, U2F is a legacy system for doing the same thing that you should not use in greenfield deployments) recruits your web browser to defeat phishing.
When you use WebAuthn to sign into a site the browser takes responsibility for determining which site you're on, cutting out the whole phishing problem of "Humans don't know which site it is". The browser isn't reading that GIF that says "Real Bank Secure Login" at the top of the page or the title "Real Bank - Authenticate" or the part of the URL bar that says "/cgi-bin/login/secure/realbank/"; it is looking only at the hostname it just verified for TLS, which says fakebank.example.
So the browser tells your FIDO authenticator OK, we're signing in to fakebank.example - and that's never going to successfully steal your Real Bank credentials because the correct name is cryptographically necessary for the credentials to work. This is so effective crooks aren't likely to even bother attacking it.
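On the server side that check is mechanical. A stripped-down sketch (TypeScript for Node) of the relevant slice of WebAuthn assertion verification - field names follow the spec, and the challenge and signature checks are omitted:

    import { createHash } from 'node:crypto';

    // The browser embeds the origin it actually verified into clientDataJSON;
    // the relying party rejects anything that isn't its own origin / RP ID.
    function checkOrigin(clientDataJSON: Buffer, authenticatorData: Buffer,
                         expectedOrigin: string, expectedRpId: string): void {
      const clientData = JSON.parse(clientDataJSON.toString('utf8'));
      if (clientData.origin !== expectedOrigin) {
        throw new Error(`origin mismatch: got ${clientData.origin}`); // fakebank.example lands here
      }
      const rpIdHash = createHash('sha256').update(expectedRpId).digest();
      if (!rpIdHash.equals(authenticatorData.subarray(0, 32))) {
        throw new Error('rpIdHash mismatch');
      }
    }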
Given that every cybersecurity czar seems to publicly resign a few months after being appointed, what are the chances of these actually being implemented?
As with most OMB memos it is complete fantasy and agencies won't comply by the date, any date close to it, or really ever. The answer to the question "who's going to be paying for it?" is nobody, which is why it will never actually get done.
The real crux of the issue is the long-tail of applications which were never conceived with anything but network-based trust. I'm certain the DoD is absolutely packed with these, probably for nearly every workflow.
The reason this was so "easy" for Google (and some other companies, like GitLab[1]) to realize most of these goals is that they are web-based technology companies - fundamentally the tooling and scalable systems needed to get started were web, so the transition was "free". Meaning, most of the internal apps were HTTP apps, built on internal systems, and the initial investment was just to take an existing proxied internal service and make it external, behind a context-aware proxy [1].
The hard part for most other companies (and the DoD) is figuring out what to do with protocols and workflows that aren't http or otherwise proxyable.
Many workflows are proxyable using fine-grained IP-level or TCP-level security. (I believe that Tailscale does more or less this.) This can’t support RBAC or per-user dynamic authentication particularly well, but it can at least avoid trusting an entire network.
Yeah, a thing that I wish Tailscale could do is hand off an attestation of some sort that says a TCP connection is being used by user X who is authorized by rule Y. Maybe "magic TLS client certs" is a thing coming on the horizon.
You can query the Tailscale API socket locally from your application to see who someone is (email address) based on the connecting IP. It would be nice if the API let you tap into their ACL system as well
Our corporate IT folks have a Zero Trust Manifesto in which there are enrolled devices (laptops that remote SREs can carry around), there are enterprise applications, and there's connectivity between these (e.g., tunnel pairs). SREs often need to write scripts that operate on sensitive production data from the enterprise applications, but must do this work directly on an enrolled device.
Pre-Zero-Trust days seemed safer. Copying production data to a laptop wasn't allowed. Instead, each SRE had their own Linux VM in the data center, accessible from home and able to run the scripts (with connectivity to the enterprise application). This prevented a whole class of realistic attacks in which a laptop (while unlocked/decrypted) is taken by an adversary. Admittedly, in return, we're protected from a possible, but less likely, attack in which a Linux VM is compromised and used for lateral movement within one segment of the enterprise network. (An enrolled device has to be in the user's possession; it can't be any machine, Linux or Windows, in the data center or office.)
The only people who love this are our enterprise application vendors. Our bosses are paying them a TON more money to implement new requirements where, in theory, all possible types of data analysis can be done directly within the enterprise application. No more scripts, no more copying of data. No more use of Open Source. And, of course, people from these same enterprise application vendors advise the government that Zero Trust must be a top priority mandate.
There isn't a great reason. It's just that the people who set the policy wanted to prevent lateral attacker movement at all costs. The VMs and the enterprise applications are in the same building, but no longer have network connectivity because they don't routinely communicate (in the past, a VM would make an outbound TCP connection only after an SRE decided that a problem existed).
The memo you're linking to was recently updated and did have force. The DOD, one of the largest federal agencies, issued its own memo with similar deadlines, and this & others have had the result of jumpstarting IPv6 / dual stack support for all of the major clouds & Kubernetes.
If FedRAMP qualification is tied to IPv6 support, you'll see every major contractor and cloud provider support it promptly.
If you look at the recent updates for cloud providers - AWS and GCP support for IPv6, Kubernetes going dual stack by default - you can see that this memo had a substantial impact.
Sometimes these things take time, but in this case, the recent memo you link to lit a fire under everyone.
What doesn't work with IPv6 on Azure? I understood they were first to support it of the major clouds, though maybe it was just in preview support for a long time.
Really pleasantly surprised at how progressive this memo is. It will be interesting to see the timelines put in place to make the transition.
Btw - I'd love to see the people who put this memo together re-evaluate the ID.me system they're implementing for citizens given how poor the identity verification is.
I have an issue with using ID.me for government websites, because it is a privately owned company. Online authentication at this point seems as important as USPS service and warrants being owned and developed by the government itself.
I have a login.gov account. Needless to say I'm not a US citizen, and the IRS should not cut me a refund check.
ID.me supports WebAuthn (or maybe U2F? In this context it doesn't matter) but importantly it does identity verification so it can determine whether I am a US citizen, whether I'm a tax payer, and if so which one.
Now, perhaps the US Federal Government should own the capability to do that instead of a private company. But, so far as I can tell, they do not and login.gov is not such a thing.
A private company shouldn’t own this capability. Especially as the ID.me CEO lied about maintaining a “1-to-many” database of every face that uses their verification system. They’re right up there with Clearview AI.
Login.gov supports identity proofing. It’s part of the flow when signing up for a Social Security account using Login.gov at ssa.gov (Login.gov recently became their primary identity provider a few months back).
TOTP is not going anywhere for much of the Internet. Hold on while I get a Yubikey to my dad who thinks "folders can't be in other folders" because that's not how they work in real life.
TOTP is a great security enhancement, and while phishable, considerably raises the bar for an attacker.
The fact that TOTP is mentioned as a bad practice in this document is an indicator that this should not be considered a general best practices guide. It is a valid best practice guide for a particular use case and particular user base.
I have a newbie question: Can't we embed a hardware key into a phone, and that'd be just as good as a Yubikey? Do we already do this, or is there a reason why we don't?
I want to disagree, but I can't, because you are right. Though perhaps as wearable tech grows(watches and what not), perhaps the keys will exist there also.
The document distinguishes between enterprise-facing and public-facing systems. For enterprise-facing (government employees, contractors, etc.), it's talking about discontinuing use of TOTP. For public-facing systems, it doesn't impose any restrictions, since (as you're saying) the general public really needs options.
The advantage of FIDO2/WebAuthn is actually biggest for non-techies. Tech people are the ones who won't fall for the bad phishing attempts; stopping malicious logins from fake sites is a massive win.
> Today’s email protocols use the STARTTLS protocol for encryption; it is laughably easy to do a protocol downgrade attack that turns off the encryption.
This can be solved with DANE, which is based on DNSSEC. When properly configured, the sending mailserver will force the use of STARTTLS with a trusted certificate. The STARTTLS+DANE combination has been a mandatory standard for governmental organizations in the Netherlands since 2016.
Of course it is. There are only a couple of email providers that actually matter, but out in the long tail of domains that might never receive a single non-spam email, there are plenty that are auto-signed by registrars. It's telling that's the best evidence you have, and not, like, "Google Mail uses DANE".
NIST has made this recommendation for years. Sadly, I work for another branch of the Federal government and despite the NIST guidance I still have to rotate my password every 60 days. (Actually, it starts sending me daily emails warning me 15 days out, and the date is based on the last change, so practically it's more like 45 days.)
I know, it's been a while since rotation was considered a best practice. Yet the security team where I work will pick a random shiny new security practice and impose it on users. (I don't mind the imposition of good security, the hassle is worth it).
Just one example where I work is a prohibition against emailing certain types of documents or data to others in the company (which is mostly Word & Excel docs). Which seems reasonable, but the accepted solution is to use the built-in encryption of MS Office to secure the file with a password and then email the file. And then send the password in another email. Honestly, that's supposed to be the protocol. The policy also hasn't been amended in any way to account for implementing Google Docs & Sheets, which can be accessed with the same credentials used for email or opened on any unattended employee's machine if they left a Gmail tab open (along with anything else in their Google Drive). And regardless of any of these rules, almost no one follows them. I do -- I have to, I'm a data custodian so I can't violate the rules -- but it annoys people.
I’m somewhat unhappy the “zero trust” terminology has caught on. The technology is fine, but trust is an essential concept in many parts of life[0], and positioning it as something to be avoided or abolished will just further erode the relationships that define a peaceful and civil society.
0: trade only works if the sum of your trust in the legal system, intermediates, and counterparts reaches some threshold. The same is true of any interaction where the payoff is not immediate and assured, from taxes to marriage and friendship, and, no, it is not possible to eliminate it, nor would that be a society you’d want to live in. The only systems that do not rely on some trust that the other person isn’t going to kill them are maximum-security prisons and the US president’s security bubble. Both are asymmetric and still require trust in some people, just not all.
Minimizing trust should always be a goal of a security system. If you can minimize trust without harming usability, compatibility, capability, security, cost, etc... you should do it.
When we talk about trust we often mean different things:
* In cryptography and security, by "trust" we mean a party or subsystem that, if it fails or is compromised, may cause the system to fail. I need to trust that my local city is not putting lead in the drinking water. If someone could design plumbing that removed lead from water and cost the same to install as regular pipes, then cities should install those pipes to reduce the cost of a trust failure.
* In other settings when we talk about trust we are often talking about trust-worthiness. My local city is trustworthy so I can drink the tap water without fear of lead poisoning.
As a society we should both increase trustworthiness and reduce trust assumptions. Doing both of these will increase societal trust. I trust my city isn't putting lead in the drinking water because they are trustworthy but also because some independent agency tests the drinking water for lead. To build societal trust, verify.
The "trust" here largely refers to identity. Do you trust that everyone in your house is your relative, by virtue of the fact that they're in your house? That falls down when you have a burglar. Similarly, is it good to trust that everyone on your corporate network is an employee, and therefore should have employee-level access to all the resources on that network? I wouldn't recommend it.
No, but I trust the people I regularly interact with and therefore allow them to be in my home. Nobody trusts people just because they happen to be in their home. To the extent that trust can go to “zero”, my fear is it will harm the (existing) first form of trust, which is vital, and have little impact on the stupid latter definition of trust.
I know tech operates on different definitions/circumstances here. That’s why the word ”zero” is so wrong here, because it seems to go out of its way to make the claim that less trust is always better.
Call it “zero misplaced trust” or “my database doesn’t want your lolly”, whatever.
I see this as the exact point of the Zero Trust terminology.
People extend your exact trust assertions to their networks, and bad actors exploit it to effect a compromise. A corporate network cannot be like your home. Zero Trust says that you should assume anything, and anyone, can be exploited - so secure appropriately.
Per your analogy, what would you do if your invited houseguests, unbeknownst even to themselves, wore a camera for reconnaissance by a 3rd party? What would you do if these cameras were so easy to hide that anyone, at any time, might be wearing one and you couldn't know?
You would have to assume that anyone that entered your home had a camera on them. You would give them no more access than the bare minimum needed to do whatever they were there to do (whether eat dinner or fix your sink). You'd identify them, track their movement, and keep records.
Your term, "Zero misplaced trust," assumes that you can identify where to place trust. Did you trust that system you had validated and scanned for 5 years...until Log4shell was discovered? Did you trust the 20-year veteran researcher before they plugged in a USB without knowing their kid borrowed it and infected it?
Zero Trust is a response to the failure of "trust but verify."
The terminology stems from "zero trusting" the network you're in - just because someone can talk to a system doesn't mean they should be able to do anything; the user (via their user agent) should be forced to prove who they say they are before you trust them and before anything can be carried out.
No connection really, but it made me think of Biden's tweet:
@POTUS 8h
In 2021, we had the fastest economic growth since 1984. The Biden economic plan is working, folks.
"Zero assumption" would have been a better phrase, but that horse is not just out of the stable, he's met a nice lady horse and is raising a family of foals and grand-foals.
Couldn't agree more on this being bad terminology. Something is always implicitly trusted. Whether it's your root CA certificates, your Infineon TPM, the Intel hardware in your box, or something else. When I first saw this term pop up I thought it meant something completely different than it does, I guess because of the domain I work in.
> “discontinue support for protocols that register phone numbers for SMS or voice calls, supply one-time codes, or receive push notifications."
... necessarily means TOTP.
Could be argued "supply" means code-over-the-wire, so all three are things with a threat of MITM or interception: SMS, calls, "supplied" codes, or push. Taken that way, all of them fail the "something I have" check. So arguably one could take "supply one-time codes" to rule out both what HSBC does and what Apple does, pushing a one-time code displayed together with a map to a different device (but sometimes the same device).
I'd argue TOTP is more akin to an open, software version of a hardware token: after initial delivery it works entirely offline, and it passes the "something I have" check.
No, I'd expect it does include TOTP. Read it as "discontinue support for protocols that supply one-time codes". A TOTP app would fall under that description.
TOTP apps are certainly better than getting codes via SMS, but they're still susceptible to phishing. The normal attack there is that the attacker (who has already figured out your password) signs into your bank account, gets the MFA prompt, and then sends an SMS to the victim, saying something like "Hello, this is a security check from Your Super Secure Bank. Please respond with the current code from your Authenticator app." Then they get the code and enter it on their side, and are logged into your bank account. Sure, many people will not fall for this, but some people will, and that minority still makes this attack worthwhile.
A hardware security token isn't vulnerable to this sort of attack.
I'd expect it to be any mechanism that doesn't do mutual authentication. In other words, the authentication not only proves to the service that you're "you", it also proves to you that the service is the one you think you are authenticating to. And it does that reliably even in the face of a MITM attack.
It's damned hard to do, and obviously none of SMS, TOTP, and passwords do it. HTTPS + passwords was supposed to do it and technically does, but in practice no one looks at the domain name. Email + DKIM could do it, but no email client shows you the outcome of DKIM auth, and again no one would look at that anyway.
WebAuthn / FIDO2 does do it. It's undoubtedly the best option right now, but until tokens are open source + reproducibly built right down to the metal, they aren't "Zero-Trust". You are forced to trust Yubi or Google or whoever, as the tokens they give you are effectively black boxes. Worse, because an open source token means "easily buildable by many companies" and thus "WebAuthn tokens become a commodity", I expect Yubi to fight it to their dying breath.
However the set of attackers who can get any advantage from the laptop sat on a conference table, much less your desk at home or in the office building, is both different and much less scary than the arbitrary crooks phishing people from the far side of the Internet.
That's an interesting subject, since there has been a lot of government push for PIV but the internet has essentially decided that FIDO2/webauthn are the way forward and making them work with PIV is non-trivial.
"This project is proof-of-concept and a research platform. It is NOT meant for a daily usage. The cryptography implementations are not resistent against side-channel attacks."
There are a lot of other providers in this space, Yubico are the best known and probably one of the more competent offerings, but this is not a situation where the government is picking a winner by picking a standard.
A bunch of situations aren't going to end up with a separate physical authenticator anyway, they'll do WebAuthn, which in principle could be a Yubico Security Key or any of a dozen competitor products - but actually it's the contractor's iPhone, which can do the exact same trick. Or maybe it's a Pixel, or whatever the high-end Samsung phone is today.
That's what standardisation gets us. If CoolPhone Co. build a phone that actually uses a retina scan to unlock, they can do WebAuthn and deliver that security to your systems without you even touching your software. And yes, in the Hollywood movie version the tricky part is the synthetic eyeball so as to trick the retina scanner, but in the real world the problem is after you steal the ambassador's CoolPhone she can't play Wordle and she reports the problem to IT before you can conduct your break-in, synthetic eyeball or not.
There are not a lot of providers on the FIPS list though. Couple that with the fact that virtually all government employees use Windows computers, and you end up right back where we started. The only real competition is Windows Hello.
The various auth apps are problematic because they usually come with some kind of requirement for Intune or similar to do remote attestation. That's a weird place for the government to be with contractors, since a lot of those contracts don't have language requiring that contractors have a phone at all, much less that they allow the federal government to MDM it.
It could be providers other than yubico, but it won't be.
>Meanwhile encryption with PGP has been a complete failure, due to problems with key distribution and user experience.
Encrypted messaging has been a complete failure; there is no need to single out email. I suspect the reason is more or less the same in all cases. Users have not been provided with a conceptual framework that would allow them to use the tools in a reasonable way. If the US federal government can come up with, and promote such a framework the world would become a different place.
BTW, the linked article is mostly based on misconceptions:
> Encrypted messaging has been a complete failure; there is no need to single out email.
Can you elaborate on why you see it this way? WhatsApp has been wildly successful, my very non-technical in-laws use Signal for their family's conversations, and other messaging platforms are jumping on the bandwagon.
As far as I can tell, if we lose encrypted messaging at this point, it will be due to government action or corporate rug-pulling, not because it failed to catch on. Whereas encrypted email really hasn't caught on anywhere.
>WhatsApp has been wildly successful, my very non-technical in-laws use Signal for their family's conversations, and other messaging platforms are jumping on the bandwagon.
You only get effective end to end encryption if you can verify that you are talking to who you think you are talking to. Otherwise the people that are running the system can cause your messages to take an unencrypted detour and thus be able to read them. This is often called a man in the middle attack. Verifying identities normally means checking some sort of long identity number. Very few people know how to do that in an effective way.
For example: in a usability study involving Signal[1], 21 out of 28 computer science students failed to establish and maintain a secure end to end encrypted connection. The usability of end to end encrypted messaging is a serious issue. We should not kid ourselves into thinking it is a solved issue.
PGP in a sense is actually better here in that it forces the user to comprehend the existence of a key in a way where it is intuitively obvious that it is important to know where that key came from.
To call encrypted messaging a complete failure you have to demonstrate that the percentage of people capable of maintaining secure messaging is stagnant. As far as I can see, the opposite is true. It is easier than ever to establish and maintain a secure communication channel.
The Signal study showed that the majority of people were unable to understand Signal's security features, but not that the security model is broken. The question at hand isn't how many people are using it wrong but how many people are using it right that never could have managed to do so with PGP keys. If even 10% of Signal's users successfully maintain a secure channel, you're looking at around 5 million people, most of whom probably would not have been able to set up secure messaging without Signal.
Do we still have work to do? Of course! But that doesn't mean that we've failed in our efforts so far.
That assumes that usability is actually getting better. There is no evidence that this is the case from usability studies. This is not a new problem and we have known what is wrong for something like 20 years now. This isn't something I just thought of. See: Why Johnny Can't Encrypt[1].
BankID: A system with a secret spec, where the bank holds your secret key, there is no transparency log whatsoever (so you have no idea what your bank used that secret key for), can be used to authenticate as yourself almost everywhere, and where you can get huge, legally binding bank loans in minutes (and transfer the money away) with no further authentication.
Oh, and if you choose to not participate in this system, enjoy trying to find out the results of your covid test :-) (I ended up getting a Buypass card, but they officially support only Windows and macOS.)
We have that in Sweden too. As an expat it's been a complete nightmare for me from day one. Getting my bank to successfully issue it was impossible.
First, in the days before mobile bank-id, they sent windows-only hardware as I recall. Then came the days of letters/cards/hardware getting lost in the mail.
I gave up on it in the end. I have multiple things (banking-wise) I no longer have online access to because of it.
If you're going to make one system to rule them all you need to make sure the logistics actually work.
(3 years ago I moved to Norway)
It took me about a month to get into the system, but once I had my national ID it took about a week for my MFA dongle to arrive. After that It has been a great experience.
There's significant bi-partisan resistance, in the US, to anything like a national ID, unfortunately, with the result that we have one anyway (because of course we do, the modern world doesn't work without it); it's just an ad-hoc combination of other forms of ID, terrible to work with, heavily reliant on commercial 3rd parties, unreliable, and laughably insecure. But the end result is still a whole bunch of public and private databases that personally identify us and contain tons of information, kind of by necessity, actually, since our ID is a combination of tons of things.
It's a very frustrating situation. Worst of both worlds.
I've done some thinking about this, and a possible solution is a bunch of cross signed CA's like the Federal common policy / FPKI for cross trust amongst federal agencies, but done at a state DMV / DPS level. Driver's licenses / state IDs could have certs embedded into the cards and then be used for things like accessing government websites, banks, etc. Yes there are some access concerns, and some privacy concerns that this is in essence a national ID, but what we have now is horribly broken, and we're already being tracked. We get all the downside of pervasive tracking, but none of the upside.
Here in Czechia we have BankID and it is problematic:
1) No verification that the user trusts that particular bank to perform this service. Most banks just deployed BankID for all their customers.
2) No verification between bank and government ensuring that a particular person can be represented by a particular bank. In principle a bank could impersonate a person even if that person has no legal relationship with that bank.
3) Bank authentication is generally bad. Either login+SMS, or proprietary smartphone applications. No FIDO U2F or any token based systems.
Fortunately, there are also alternatives for identification to government services:
1) Government ID card with smartcard chip. But not everyone has a new version of ID card (old version does not have chip). It also requires separate hardware (smartcard reader) and some software middleware.
2) MojeID service (mojeid.cz) that uses FIDO U2F token.
Disclaimer: working for CZ.NIC org that also offers MojeID service.
#2 and partially #1 are solved by regulation and reputation: banks are a highly regulated business, and BankID support requires a specific security audit.
Ad #3: FIDO is basically unusable for banking. It's designed for user authentication, not transaction signatures which banks need (and must do because of the PSD2 regulation).
If banks were actually onboard with this stuff, I'm pretty sure you can either make this happen in FIDO2 anyway, or you could add a FIDO extension that does it and get big vendors like Yubico to support that extension. Notice that off-line authenticating a Windows 10 PC relies on hmac-secret in FIDO, which is not a core FIDO feature, but it got ratified because there's a use for it, and a Yubikey can do hmac-secret.
But I do not see any such engagement from banks.
Transaction signatures are good if well implemented, but I'm not seeing a lot of good implementations. To be effective the user needs to understand what's going on so that they're appropriately suspicious when approached by crooks.
e.g. if I just know I had to enter 58430012 to send my niece $12, I don't end up learning why, and when crooks persuade me to enter 58436500 I won't spot that this is actually authorising a $6500 transfer and that I should be alarmed.
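The well-implemented version is easy to describe in code: the code the device shows is derived from the exact transaction details, so a code approved for one transfer can't authorise a different one. A TypeScript sketch, purely illustrative and not any bank's actual scheme:

    import { createHmac } from 'node:crypto';

    // A confirmation code bound to the exact amount and payee: change either
    // and the code no longer verifies.
    function transactionCode(deviceSecret: Buffer, amountCents: number,
                             payee: string, challenge: string): string {
      const msg = `${amountCents}|${payee}|${challenge}`;
      const mac = createHmac('sha256', deviceSecret).update(msg).digest();
      return (mac.readUInt32BE(0) % 1_000_000).toString().padStart(6, '0');
    }

The hard part, as above, is the UX: the device also has to show the amount and payee so the human understands what they are approving.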
I think the FIDO Alliance is already discussing solutions to these use cases. (And also this is a bit of circular reasoning, isn’t it? “Why don’t you use the XYZ standard? Because it does not support our use case. So why don’t you cooperate on adding support to the standard? Why? So that you can use the XYZ standard!”) Also, I think there already are extensions supporting some basic forms of this; however, they are not supported very well.
But I’m afraid the basic prerequisite of secure transaction signing (“what you see is what you sign”) cannot be fulfilled on a generic “FIDO2 authenticator” – you need the authenticator to have a display. Sure, Windows Hello / Android FIDO / … might support this, but your common hardware Yubikey cannot.
I don’t know to which authentication method used by which bank in which country you refer in your “58430012” example, but this is definitely nothing which could be used as a method of transaction signatures in banks here, and it does not fulfill the requirements of the PSD2 regulation.
Which requirement of PSD2 do you think is so stringent?
I have three bank accounts here:
One of them (my good bank) has a chiclet keypad physical authenticator which needs these manual codes entering to get a value back that proves I used the authenticator.
The large European bank that handles my salary and so on relies on SMS entirely: I ask to perform a transaction, they send an SMS with a code, I type it into a box on the web site. The SMS tries to tell me what that transaction is, and has improved (it used to say things like GBP20000, which, yes, everybody on Hacker News knows what that means, but I bet my grandmother wouldn't; today it says £20 000, which is easier to understand), but notice that the code you get isn't related to the transaction details, it's just an arbitrary code. So I needn't understand the transaction to copy-paste the code.
The third bank is owned by the British government and so is inherently safe with unlimited funds unlike a commercial bank (they can and do print money to fund withdrawals, they're the government) but they too use SMS and their SMS messages are... not good. Of course unlike a commercial bank if they get fined for not obeying security rules that's the government fining the government, who cares?
FIDO would be obviously better than the latter two, and I don't see any reason that (with some effort) it couldn't improve on the first one as well.
Oh, I misunderstood. You enter the mentioned code into an authentication calculator, which emits the signature code that is then used. Yeah, that probably fulfills the PSD2 requirements, though I agree it's not exactly good UX, nor very secure for common users. That (well, and mostly the cost) is the reason everyone goes to mobile authentication apps nowadays.
SMS authentication is... well by one reading of PSD2, it's not acceptable. But in real world, it is basically necessary, and not _that_ insecure (if you ignore SIM swapping attacks etc.). The WYSIWYS aspect comes not from the code but from the message text, which is crucial (and per PSD2, should include at least the amount and... receiver? I forgot). But sure, if people don't read or understand the message, it's not ideal...
While FIDO provides better phishing resistance (than SMS, not necessarily than authentication apps), it doesn't protect against transaction modification (e.g. man in the browser) and for people who care about and understand security, it is strictly worse.
> (than SMS, not necessarily than authentication apps)
Very dubious. The trick to phishing is that humans are easily confused about what's going on, and WebAuthn recruits the browser to fix that completely. Your browser isn't confused, the browser knows it is talking to fakebank.example because that's the DNS name which is its business, even if this looks exactly like the Real Bank web site, perfect to the pixel and even fakes the browser chrome to have a URL bar that says realbank.example as you expected.
I don't see bank authentication apps helping here. It's very easy to accidentally reassure the poor humans everything is fine when they're being robbed, because the authentication part seemed to work.
I'm somebody who really cares about and would like to think they understand security very much, and I don't think it's strictly worse at all.
One of the things banks have an ongoing problem with is insider facilitated crime. Which means secrets are a big problem, because the bank (and thus, crooked staff working for the bank) know those secrets. Most of these PSD2 "compliant" solutions rely on secrets, and so are vulnerable to bank insiders. FIDO avoids that because it doesn't rely on secrets†.
† Technically a typical Security Key has a "secret" key [typically 256-bit AES] baked inside it, but a better word would be symmetric rather than secret: there is no other copy of that symmetric key, so it isn't functionally secret.
> While FIDO provides better phishing resistance (than SMS, not necessarily than authentication apps), it doesn't protect against transaction modification (e.g. man in the browser) and for people who care about and understand security, it is strictly worse.
'Man in the browser' seems like a situation where the user's device is compromised. In that case it is not a big stretch that not only the browser but also the SMS-reading app is compromised.
I.e., the reasonable security requirement should not be security against 'man in the browser', but security against 'user device is compromised'. In that case SMS is worse, as the attacker could completely bypass it, while with FIDO they still need to phish the user into pressing the button.
The problem with BankID is that for older accounts, there's no real guarantee you are who you claim to be.
I mean, sure, my bank in Norway has my account tied to a person number, but they don't actually know that when I log in with BankID I really am the person associated with that person number. Theoretically the post office was supposed to verify my identity before they gave me the packet containing the code brick, but they forgot to do so - this was over 10 years ago, before they had to register the ID details.
So basically I have a highly trusted way of authenticating to financial and government services in Norway even though nobody actually knows that I am who I claimed to be when I opened the bank account, setup bankid, etc.
There is a large contingent of non-religious people who are against it on civil liberties grounds. The resistance to it truly crosses both parties, and it requires the cooperation of the States, which makes it politically non-viable as a practical matter.
The thing I don't get about the non-religious arguments is that we already have a national ID, it's just a patchwork system of unreliable, not-particularly-secure forms of identification that are a pain in the ass for a regular citizen to have to deal with. And the REAL ID stuff essentially makes state IDs conform to a national ID specification anyway.
And regardless, if you do want a national US ID, you just get a passport, and it'll be accepted as a form of ID everywhere a state-issued driver's license or state ID is accepted. Of course, in this case it's technically voluntary, and many Americans don't travel internationally and don't bother to get a passport.
Many State governments do not recognize a US passport as valid ID. This was unexpected when I first encountered an example of it, but apparently that is normal and I was just the last person to find out. The REAL ID legislation only regulates processing and format, there is no enforceable requirement to share that with the Federal government and many States (both red and blue) do not in practice. States recognize the ID of other States, as is required by the Constitution.
Because there is no official national ID system, you can do virtually everything Federally with a stack of affidavits and pretty thin "evidence" that you are who you claim to be. They strongly prefer that you have something resembling ID but it isn't strictly required. This also creates a national ID bootstrapping problem insofar as millions of Americans don't have proof that they are Americans because there was never a requirement of having documentary evidence. As a consequence, government processes are forgiving of people that have no "real" identification documents because so many people have fallen through the cracks historically.
Of course, this has been widely abused historically, so the US government has relatively sophisticated methods for "duck typing" identities by inference these days.
> The thing I don't get about the non-religious arguments is that we already have a national ID, it's just a patchwork system of unreliable, not-particularly-secure forms of identification
Yes, and this unreliable patchwork is already being heavily abused by surveillance companies (eg Equifax, Google, LexisNexis, Facebook, Retail Equation, etc) storing our personal information without our consent - creating permanent records on us that we can only guess the contents and scope of, sorting us into prescriptive classes so that we can be better managed, and remaining completely unaccountable to even their most egregious victims.
Social security numbers were promised to only be used for purposes of administering social security, and yet now they're required by many businesses for keying into that surveillance matrix. The main thing holding back more businesses from asking for identifiers is that people are hesitant to give them out.
Before there is any talk of strengthening identification, we need a US GDPR codifying a basic right to privacy. Until I'm able to fully control the surveillance industry's dossiers on me (inspection, selective deletion, prohibit future collection), I'll oppose anything that would further empower them.
> Before there is any talk of strengthening identification, we need a US GDPR codifying a basic right to privacy.
That's a fair point, agreed. Privacy needs to be legally recognized as a strong right before we allow more centralization of this sort of thing. (Though sadly it's already pretty centralized, just not by the federal government.)
Which is why you ignore them. No reason for a nation to be held back by this type of person. Same reason you don’t take cancer treatment advice from someone who suggests juicing.
It was expedient, but banks are not the orgs that should be running that.
Every nation needs to turn their driver's-ID and passport authorities into a 'Ministry of Identity' and issue fobs and passwords that can be used on the basis of some standard. Or something like that, maybe quasi-distributed.
I hear people say all the time that, in the US, the Postal Service would be great for this, and I can't help but agree. Sure, they'd have to develop in-house expertise around these sorts of security systems (just as any new federal government agency put in charge of this would have to do), which could be difficult. But they have the ability to distribute forms, documentation, and tokens to pretty much everyone in the US, with physical locations nearly everywhere that can be used to reach those who don't have physical addresses.
I wonder if the recommendation for context-aware auth also includes broader adoption of Impossible Travel style checks?
For context, Impossible Travel is typically defined as an absolute minimum travel time between two points based on the geographical distance between them, with the points themselves derived from event-associated IPs via geolocation.
The idea is that if a pair of events breaches that minimum travel time by some threshold, it's a sign of credential compromise. It's effective for mitigating active session theft, for example, as any out-of-region access would violate the aforementioned minimum travel time between locations and produce a detectable anomaly.
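A minimal sketch of that check in TypeScript, assuming you already have a lat/lon for each event from a geoip lookup (the 900 km/h ceiling is an arbitrary "roughly airliner speed" choice):

    interface LoginEvent { lat: number; lon: number; timestampMs: number; }

    // Great-circle distance between two events, in km.
    function haversineKm(a: LoginEvent, b: LoginEvent): number {
      const R = 6371, rad = (d: number) => (d * Math.PI) / 180;
      const dLat = rad(b.lat - a.lat), dLon = rad(b.lon - a.lon);
      const h = Math.sin(dLat / 2) ** 2 +
                Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(h));
    }

    // Flag pairs of events that would require implausibly fast travel.
    function impossibleTravel(prev: LoginEvent, curr: LoginEvent, maxKmh = 900): boolean {
      const hours = Math.max((curr.timestampMs - prev.timestampMs) / 3_600_000, 1e-6);
      return haversineKm(prev, curr) / hours > maxKmh;
    }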
Geolocation is often unreliable. There's no sure way to go from IP address to accurate location; it's all based on guesses about how things got routed previously. My previous home routinely showed up as a different country in many different geoip databases, so something like that would have instant-banned me every time I switched from cellular (which a lot of databases place about 100 mi away from my home) to home WiFi, which would show a jump of 1,000 mi.
Even giant orgs like Google who should be good at this will fail at this. I've had services with their Cloud Armor set to disallow connectivity from non-US connections, and yet connections in the US get flagged as non-US even when a traceroute shows no hops going overseas.
Is this practical? I would imagine with how peering can get better/worse in an instant (and continuously change as different routers pick up new routes) you can't use ping to measure this, and geoip databases don't seem like a source you could trust, especially with CGNAT throwing you onto some generic IP with a geoIP that everyone else in a 200 mile radius also gets.
Any kind of tunnel or VPN would also mess with the minimum travel time. This seems like a good way to cause more problems for regular people just trying to log in from slightly unusual network configurations than for any hypothetical man-in-the-middle.
From my experience it isn't a be-all-end-all so much as another point of data for anomaly detection.
In practice, geoip "regions" like the ones used for this _are_ on the larger scale, yes; however, that still lets you ask valuable questions like "why is this user who logs in from Vermont, USA suddenly in Hungary?" and potentially do something proactive like limiting that session's resource access until a new MFA challenge has been passed, or (more aggressively) destroying the session or otherwise forcing a full reauth.
The downside is that this almost always relies on some actively maintained geoip database (a la MaxMind), which... well, it isn't exactly cheap, and it isn't exactly perfect, either (see: MaxMind historically putting IPs in a central location when lacking specific data).
Ultimately it's one part of (what should be) a suite of checks for anomalous behavior, not something to blindly implement. The latter can cause a great deal of grief if your tolerances are too tight or your proactive actions aren't in line with the activity/abuse they're intended to mitigate.
A pretty direct example of proactive action would be restricting access to using saved payment methods on a platform until the user has completed a new 2fa challenge.
You could require this challenge every time the user wants to buy something, yes, but as that will probably impact checkout rates, you could get many (if not most) of the same fraud-prevention benefits by only challenging when the session has moved some threshold distance - it won't stop them from buying something at home or a coffee shop in town, but it would stop a session hijacker on an opposing coast or in another country from doing so.
> "why is this user who logs in from Vermont, USA suddenly in Hungary?"
Maybe because the new login is from a hacker. Maybe because your geoip database provider is unreliable. Either one is likely. There's no sure way to go from an IP address to a location.
Right; I covered that explicitly further down in the post.
I wish I had made it more clear in my original post that Impossible Traveler checks are not a magic bullet, as most are assuming that this would be used all on its own for whether to bar access.
Most likely GeoIP information is used as one of many inputs to a neural net that decides whether you can log on or not (see tons of "Google locked me out" examples).
I'm constantly "traveling" between Seattle (home) and San Jose (vpn1) or Boardman, OR (us-west-2) according to "logged in from X", so this doesn't really work unless you add offsetting rules/attributes.
This sounds really beautiful, and I am saving the link for future reference.
I'm curious about the DNS encryption recommendation. My impression was that DNSSEC was kind of frowned upon as doing nothing that provides real security, at least according to the folks I try to pay attention to. Are these due to differing perspectives in conflict, or am I missing something?
DNSSEC is security in the other direction (DNS server -> client). All DNSSEC does is securely sign the responses to DNS queries.
So DNSSEC is the answer to: can I trust that this IP is valid for the name news.ycombinator.com?
DNS over TLS/HTTPS just says: nobody but the DNS server I use can see that I'm asking for news.ycombinator.com's IP. It's mostly useless at the moment, since other gaps leak essentially the same information (SNI, etc.), but it should get more useful over time, as people are working on fixing those gaps.
QUIC and ECH cover the leakage of host information already, so DNS over TLS/HTTPS is plenty useful already IMO. It just needs hosts to upgrade to support ECH or QUIC. Plus it's never bad to cover your bases: DNS queries are for more than just the browser (e.g. an SRV record an application may use for connection data, or TXT records for configuration or ACME).
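For the curious, a DoH lookup is just an ordinary HTTPS request. A TypeScript sketch against Cloudflare's public JSON endpoint (other resolvers use the binary wire format or different URLs):

    // Resolve A records over HTTPS; the query never touches plaintext UDP/53.
    async function dohLookup(name: string): Promise<string[]> {
      const res = await fetch(
        `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`,
        { headers: { accept: 'application/dns-json' } },
      );
      const body = await res.json();
      return (body.Answer ?? []).map((a: { data: string }) => a.data);
    }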
> Do not give long-lived credentials to your users.
This screams "we'll use more post-it notes for our passwords compared to before", or maybe the real world to which this memo is addressed is different compared to the real (work-related) world I know.
This was a very unfortunate choice of words by the author, as they don't mean credentials as in the credential a user uses to initially authenticate to the system. Rather they mean authentication tokens, be they Kerberos tickets, bearer tokens, etc.
This memo in particular emphasizes the existing guidance the US government has issued around not expiring passwords. If you are a federal agency, you can have (and are in fact encouraged to have!) users with passwords that are unchanged for years.
Edit: it's worth pointing out that the memo does a great job of laying this out. I work in security, so possibly there's some curse of knowledge at play, but I found the blog post explainer to be less clear than the memo it is explaining...
It specifically calls out not requiring regular password rotation. Short-lived credentials is for tokens with expiration, not the password you use to login to the service that gives you the token.
The general attitude among practitioners now is that "post-it notes with passwords on them" is superior to the more common practice of "shitty passwords shared across multiple services".
> Do not give long-lived credentials to your users.
That was poorly worded in the article. Among the things it is is saying you should give to your users is a WebAuthn token. Inside the WebAuthn token is a random private key it never reveals. That is the thing the authentication "you are you" ultimately relies on, and it is very much a "long lived credential".
What he is trying to say is more complex. It's something along the lines of "you go to some authentication / authorisation service, prove you're you and say you want access to a service, and it hands you back some short term credentials you can provide to that service allowing you to use it". You, the authentication provider, and the service you trying to access might be in different countries. The danger in that scenario is someone might steal those credentials while they are in transit. One way to mitigate that is to ensure those credentials don't last for very long.
So, it's a statement about how distributed systems should handle passing credentials among themselves. The user never sees these credentials, and of course never has to remember them. Any temporary credential lasting longer than a person's sleep/wake cycle is considered broken in this world, but it's understood the user will carry with them a relatively long lived way of proving they are who they say they are.
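To make the "short-lived credential the user never sees" idea concrete, here's a minimal sketch using PyJWT. The claim names, the HS256 choice, and the 15-minute TTL are illustrative assumptions on my part, not anything the memo prescribes:

    # Sketch: an auth service mints a bearer token that expires in minutes;
    # downstream services check only the token, never a long-lived secret.
    import datetime
    import jwt  # pip install PyJWT

    SIGNING_KEY = "replace-with-a-real-secret-or-an-asymmetric-key"

    def issue_token(user_id: str, ttl_minutes: int = 15) -> str:
        now = datetime.datetime.now(datetime.timezone.utc)
        claims = {
            "sub": user_id,
            "iat": now,
            "exp": now + datetime.timedelta(minutes=ttl_minutes),
        }
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

    def verify_token(token: str) -> dict:
        # Raises jwt.ExpiredSignatureError once the token outlives its TTL,
        # which is the whole point: a stolen token goes stale quickly.
        return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])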
You only need two of the three factors to make it multifactor. A computer with a fingerprint scanner could be enough if it’s implemented properly. It’s something you have (a specific computer with a specific fingerprint scanner) with something you are (the fingerprint). In practice, you also need to know an identifier like an email or username, but those are often considered public.
> “Enterprise applications should be able to be used over the public internet.”
Isn’t exposing your internal domains and systems outside VPN-gated access a risk? My understanding is this means internaltool.faang.com should now be publicly accessible.
It's a different framing to get rid of fig leaves. Everything has to be built so that it actually has a chance of being secure - if your state of mind is "this is exposed to the public internet", BS excuses like "this is only exposed to the TotallySecure intranet" don't work any more, because they don't work in the first place. Perimeter security only works in exceedingly narrow circumstances which don't apply - and haven't applied for a long time[1] - to 99.999 % of corporate networks.
[1] Perimeter-oriented security thinking is probably the #1 enabler for ransomware and lateral movement of attackers in general.
For anyone confused about the term "fig leaf", I assume it's a reference to the fig leaves used by Renaissance artists to mask genitalia. So "things concealing the naked truth", approximately.
It is a risk. The discourse on VPNs is messy. It's true that you shouldn't rely solely on VPNs for access control to applications. It's also true that putting important services behind a VPN significantly reduces your attack surface, and also puts you in a position to get your arms around monitoring access.
The right way to set this stuff up is to have a strong modern VPN (preferably using WireGuard, because the implementations of every other VPN protocol are pretty unsafe) with SSO integration, and to have the applications exposed by that VPN also integrate with your SSO. Your users are generally on the VPN all day, and they're logging in to individual applications or SSH servers via Okta or Google.
I don't like VPNs. I think there's better ways of protecting our infrastructure without them. AWS offers a lot of technologies for doing just that.
A VPN is another failure point: when it goes down, all of your remote workers are hosed. The productivity losses are immense. I've seen it first-hand. The same goes for bastion hosts. Some tiny misconfiguration sneaks in and everybody is fubared.
Bastion hosts and VPNs: we have better ways of protecting our valuables that are also a huge win for worker mobility and security.
That's true of legacy VPNs like OpenVPN, and less true of modern VPNs. But either way: a VPN is a meaningful attack surface reduction for all internal apps, one that doesn't require individual apps to opt in or stage changes, and doesn't require point-by-point auditing of every app. Most organizations I've worked with would be hard-pressed to even generate an inventory of all their internal apps, let alone an assurance that they're properly employing web application security techniques to ensure that they're safe to expose on the Internet.
It's not clear to me that a VPN endpoint is a meaningfully smaller attack surface than an authenticating proxy? The VPN approach has a couple of downsides:
* You don't have a central location to perform more granular access control. Per-service context aware access restrictions (device state, host location, that sort of thing) need to be punted down to the services rather than being centrally managed.
* Device state validation is either a one-shot event or, again, needs to be incorporated into the services rather than just living in one place.
I love Wireguard and there's a whole bunch of problems it solves, but I really don't see a need for a VPN for access to most corporate resources.
Sure: an authenticating proxy serves the same purpose. I agree that unless you have a pretty clever VPN configuration, you're losing the device context stuff, which matters a lot in some places.
I'd do:
* SSO integration on all internal apps.
* An authenticating proxy if the org that owned it was sharp and had total institutional buy-in both from developers and from ops.
I'm having trouble understanding what the fundamental difference is between these. Is it just a matter of a single, centralized proxy at the perimeter of your service network versus in-service SSO? Is there a functional difference between being in the same process space versus a sidecar on the same host versus a service on another host?
Ultimately it boils down to trusting the authority, whether that's a function (code review), a shared library (BOM), an RPC (TLS), a sidecar (kernel), or a foreign service (mTLS). There are different strengths and weaknesses for each of these, but it's not clear to me that the options you would prefer are distinctly more or less secure -- maybe there is an argument for defense in depth, but I'm not certain that's what you're pitching.
Most SSO solutions don't verify device identity or state, so you're not ensuring that the connection is coming from a computer you trust running software you trust.
I guess it's a matter of what the IdP attests. It's definitely possible for an IdP like Okta to include a ton of client details as part of the attestation payload. Stuff like GeoIP, client certificate fields, MDM status, etc.
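As a sketch of what the consuming side could look like: assume the IdP's ID token carries custom device-posture claims (the claim names and audience below are made up for illustration, not a real Okta schema), and the proxy or service checks them before authorizing:

    # Hypothetical device-posture check on an IdP-issued ID token.
    # "mdm_enrolled" and "geo_country" are invented claim names.
    import jwt  # pip install PyJWT

    def device_posture_ok(id_token: str, idp_public_key_pem: str) -> bool:
        claims = jwt.decode(
            id_token,
            idp_public_key_pem,
            algorithms=["RS256"],
            audience="internaltool.example.com",  # illustrative audience
        )
        return (
            claims.get("mdm_enrolled") is True
            and claims.get("geo_country") in {"US", "CA"}  # illustrative policy
        )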
If you have the institutional buy-in to handle auth being done at the proxy level, that gets you away from having to implement SSO per service. I agree that doing this well isn't trivial, but in the long term there's a reasonably compelling argument that it makes life easier for both developers and ops.
Thanks for saying this. This was exactly my take. Saying goodbye to VPNs just completely ignores the risk of RCE vulnerabilities on your services. You can have a VPN that still brings you into a zero trust network.
It does essentially define away the authentication bypass problem, which is a class of vulnerability we still regularly find in modern web applications. To say nothing of the fact that no human has ever implemented a SAML RP without a game-over vulnerability. Seems like a self-evidently bad plan.
There are different ways to look at it. From a defense-in-depth perspective, you are right. That is, however, one of the main points of a zero-trust environment (or you could say Zero Trust), which is a kind-of-new trend that much has been written about.
Think about it this way: In the context of ransomware attacks, a lot of times it's game over once an internal agent is compromised. The premise of zero trust is that once an attacker is "inside the wall", they gain basically nothing. Compromising one service or host would mean having no venue for escalation from there.
I wouldn't say it's objectively better (maybe by the time I retire I can make a call on that), but it's a valid strategy. It's certainly better than relying on perimeter-based security like a VPN alone, rather than treating the perimeter as just one layer of defense in depth.
buganizer.corp.google.com is an alias for uberproxy.l.google.com.
uberproxy.l.google.com has address 142.250.141.129
uberproxy.l.google.com has IPv6 address 2607:f8b0:4023:c0b::81
Google's corp services are publicly accessible in that sense - but you're not getting through the proxy without valid credentials and (in most cases) device identity verification.
As I understand it, this sentence says that the application should be safe even if it was exposed to the public internet, not that it needs to be exposed. It is good practice to secure everything even if it's visible only internally. The "perimeter defense" given by a VPN can be a plus, but never the only line of defense.
The memo does say each agency needs to pick one system that is not internet accessible and make it accessible within the next year. The way I read this memo, it's pushing the view that VPNs don't add much in the way of security (if you follow the rest of the memo) and should be removed.
The other way to read that part of the memo is that the exercise of exposing an application on the public Internet is a forcing function that will require agencies to build application security skills necessary whether or not they use VPNs. Note that the memo demands agencies find a single FISMA-Moderate service to expose.
"Further, Federal applications cannot rely on network perimeter protections to guard against unauthorized access. Users should log into applications, rather than networks, and enterprise applications should eventually be able to be used over the public internet. In the near-term, every application should be treated as internet-accessible from a security perspective. As this approach is implemented, agencies will be expected to stop requiring application access be routed through specific networks, consistent with CISA’s zero trust maturity model."
"Actions … 4. Agencies must identify at least one internal-facing FISMA Moderate application and make it fully operational and accessible over the public internet."
It's true that they didn't mandate detecting and blocking accesses from VPNs, if the user chooses to connect through one. However, they pretty clearly are saying that the application should be exposed to the public Internet, which is the opposite of what enriquto claimed[0] earlier in this thread:
> As I understand it, this sentence says that the application should be safe even if it was exposed to the public internet, not that it needs to be exposed.
The thing is that over-focus on perimeter security is still a huge problem, and one reason that e.g. ransomware owns orgs with depressing regularity. There's nothing wrong with perimeter controls in and of themselves. But they become a substitute for actually securing what's on the internal network, so once you've bypassed the perimeter, it's all too easy to roam at will.
The people over-relying on perimeter security are the folks buying a big sixties car and assuming that chrome bumpers are a substitute for seatbelts and traction control.
The point isn't to actually expose your internal services. It's to not assume that attackers can't breach your network perimeter. Internal traffic should be treated with the same level of trust as external traffic, that is, none at all.
In a true Zero Trust model, every client device would have the minimum number of network permissions necessary to do its job - as would every other device. Every device could only connect to known good/known necessary endpoints over specific ports and protocols. All else would be blocked.[1]
If the client device were compromised with a zero day exploit, the blast radius would be substantially smaller, the difficulty of an attacker mapping a network for later exploit would be exponentially larger, and time to response would dramatically shrink.
[1] (This is particularly relevant for fixed-function IoT and Operational Technology devices. General computing devices need broader controls, but again - the minimum necessary for that user, in that business context, to do their job.)
For instance, the zero-trust system we are building at bastion-zero uses ephemeral ECDSA key pairs attested by tokens that expire.
+ When the user logs out, these key pairs and tokens are deleted. Ideally tokens should be revoked as well. If an attacker installs an implant on the user's endhost and the user is not logged in, the attacker doesn't get any access because there are no keys/tokens to steal. If the implant/attack is discovered prior to the user logging in, the device can be reset and the attacker doesn't get any access.
+ If the attacker installs an implant on the user's endhost and the user is logged in, the attacker gets the key pair and tokens. If the attacker attempts to exfil the key pair/tokens and use them from another host, this may set off alarms. An attacker who wishes to be stealthy and maintain access must conduct the attack through that endhost (at least until they compromise additional systems). Once the tokens expire, the attacker is locked out again.
+ If the attacker manages to watch the user log in and generate the attestation for the key pair, and good MFA is employed (e.g. U2F/FIDO), the attacker cannot keep obtaining new key pairs, since they cannot read the secret from the MFA device.
+ As wmf suggests, monitoring helps a lot. Monitoring is extra powerful when you can easily revoke the user's key pair without revoking the user. Say a user triggers an alarm: automatically revoke the key pair and see if they can reauth. If it is a stolen key pair, the attacker might not be able to get a new key pair issued if the actual user is offline. If you decide the device might be compromised, you can disable access from that device and have the user pick up a new laptop.
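For illustration only (not the vendor's actual implementation, just the general shape of the idea): generate an ephemeral ECDSA key pair at login, have the auth service bind it to a token that expires, sign requests with the in-memory key, and drop everything at logout so there is nothing durable to steal:

    # Sketch of the ephemeral-key pattern using the 'cryptography' package.
    import datetime
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def new_session_keypair():
        private_key = ec.generate_private_key(ec.SECP256R1())
        public_pem = private_key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        # The public key (plus an MFA-backed proof) would be sent to the auth
        # service, which returns a token that expires after, say, a work day.
        expires_at = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=8)
        return private_key, public_pem, expires_at

    def sign_request(private_key, payload: bytes) -> bytes:
        # Each request is signed with the in-memory key; at logout the key is
        # discarded, so there is nothing long-lived for an implant to steal.
        return private_key.sign(payload, ec.ECDSA(hashes.SHA256()))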
They should have given out free identity FOBs with vaccines.
I'm only half joking.
It's really just a matter of changing gears - you carry a physical key to your house, car, and your online life. You lose the key, you have to go through a bit of pain to get a new one.
But establishing that norm is beyond the purview of anyone it seems.
Perhaps one of those advanced Nordic countries will have the wherewithal; Estonia seems to be ahead of all of us, but we don't pay attention.
Tangential remark to author if they’re reading: favoring “she” isn’t more inclusive for an unknown pronoun. You probably already use a non-gendered singular “they” in normal speech and you could use that where the gender/preference isn’t known. Just a suggestion from an NB who passes as male and thinks binary gender substitution doesn’t help, even though I appreciate the effort.
I thought we had reached peak bureaucracy but I was wrong.
On the plus side, it's good that they finally figured out that forcing frequent password changes and forcing the usage of special characters are anti-patterns. I've been repeating this for over a decade.
Deprecating passwords is the wrong conclusion. A better solution would be to educate people about good password creation and handling practices. A 1-page document and/or short video would do.
Organizations have been educating people about good password creation and handling practices for over a quarter century. It hasn't worked and there is no sign that it will ever work.
(Perhaps I misunderstood and you were being sarcastic?)